Marine: A Video Game Review

Marines: Modern Urban Combat

Let’s just get one thing out in the open: I pre-ordered this game. I half wonder if I was the only one in the world to do so.

I was on Amazon ordering Modern Warfare 2 Mobilized when this game, called Marines, came up. The game, listed as Marines: Assault on Terror, had a box cover that looked like it was modeled after… well, Call of Duty: Modern Warfare.

I laughed; the game was obviously trying to sneak in under the Modern Warfare 2 radar and get picked up by some unsuspecting parents. Still, I was curious, so I looked it up and did some reading.

The game is not new. It is a port of an XBox game released in 2005 called Close Combat: First to Fight. The developers worked with actual Marines, first, it seems, to make a training simulator, and then also to make this game.

So I looked up THAT game. Well, Gamespot gave it a 7.3, IGN gave it an 8.0 overall. Not too shabby for an FPS, I thought. The game was going for $30, and I thought, what the hell. So I ordered it with MW: Mobilized.

That was back in November. The game was supposed to be out the week after Thanksgiving. Then the first week of December. Then the second. The release date changed at least five times before it finally shipped to me. It arrived today, 28 January, 2010.

So that is how I came into possession of Marines: Modern Urban Combat, Assault on Terror, Close Combat: First to Wii.

(The game is called Marines: Modern Urban Combat, which is what the box cover art said all along, but not the Amazon listing.)

Now on to the review.

Another confession: I really wanted to love this game.

Marines MUC takes place in an alternate reality (history?) Beirut. A rebel faction has started a civil war and we, the USA, are there to try and stop it. The plot is told through mock news reports from INN, the International News Network.

The game is squad-based. You and three of your Marine friends travel together through each level. You start by clearing streets, then move on to buildings and whatnot. As squad leader you can give commands to your group, but they are pretty basic.

The controls are pretty standard for a Wii FPS. There is no jump, but you can kneel and lie prone. Reloading has you move the Wii controller down, then back up. In an FPS this is an awkward motion, as it takes the camera with it.

Remember, this game is an XBox port, and it looks like it. The graphics have not been updated, which isn’t bad in itself, but they are obviously last generation. That isn’t the problem, though. The problem is the refresh / redraw rate. There were times as I turned when noticeable parts of the screen were not keeping up.

And the load times. Did I mention this was an XBox port? I haven’t had load times like this in years. It makes you appreciate how Metroid and some other games work past the loads as much as possible.

The AI of the game is more impressive than the graphics. The bad guys will hide behind things and even wait for you to move before they shoot. It is not a run and gun, which is admittedly my preferred tactic for these sorts of situations. Your squad mates are actually useful, rather than being mobile targets you have to keep alive. The formations they take are very realistic, and although they will kill baddies, I’ve noticed at times their aim is downright pitiful.

Not that I am one to talk.

The added Wii controls are just not as fluid as I am used to. You can adjust both the dead zone and the controller sensitivity. I played with both the whole time, but couldn’t find a good medium. Perhaps there is one and I haven’t found it yet. For now, it is hard to aim with precision with any sort of urgency. Enemies that were fairly close to me could be hard to hit. The reaction time of the controls was slow and seemingly inconsistent. Using the scope helped some, but slowed down the motion even more.

The thing about this game is that I could see that 7.3 – 8.0 game in it as I was playing. There is a good game in here, and it deserved more than a speedy port to cash in on Modern Warfare 2. A slower polish, applying the same attention to detail that shaped the original, would have resulted in a fairly strong title, even with the XBox graphics.

Instead it is a shadow. Playable, yes, but not what it could be. I’d drop this from an 8.0 to a 5. The game did not start off life as shovelware, and it didn’t need to be reincarnated as such.

Life After the Word Processor, part two

Ok, so in that previous post I talked about how I was dealing with storing information. Because let’s face it, even if I weren’t a writer, there is a lot of it out there. And it usually isn’t very well organized.

But what of these programs for writers? You’ve seen them in the stores: WRITE YOUR NOVEL NOW!, with some author you’ve never heard of saying they’d never have been able to do anything without Program X.

Now why would I need program X, I’d always say, when I have a word processor? What could it do beyond, you know, typing?

Then I got an email from Mariner Software about StoryMill, which is their Program X. Since I have and like Mac Journal (using it now to write this entry!), I went and took a look. My first thought was: what can this thing do that Mac Journal or NeoOffice can’t?

The short version is: nothing. My word processor and some folders can do anything StoryMill can do. My word processor and Mac Journal can do anything StoryMill can do. Hell, TextEdit and some proper file names can do all of this. So why did I find myself drawn again and again to StoryMill?

Presentation and packaging, which has to be the software equivalent of “Location, Location, Location.”

StoryMill provides a single interface for writing scenes, which are then grouped into chapters, alongside character bios, place descriptions, even outside research. All of these can be tagged, marked, and labeled with “1st draft”, “final draft”, etc.

This organization lets you have all your information right there in front of you. “Now what color was that dude’s hair?” We’ve all been there. You just click on the Characters tab, find him, and there he is. Scenes can be marked with who is in them, so you can see at a glance which characters are in which scenes.

The scenes are then put into chapters, and from there you can read through the scenes chapter by chapter. This makes moving scenes around easier. Decide you want to talk more about the good guys in the coffee shop before you show the bad guy again? Just drag the scenes into the order you want.

There is a timeline feature that lets you tag scenes with a specific date and time. You can then see them laid out on the timeline, arranged by character storyline. This will help keep you from having a character in two places at once.

It even has a ‘progress’ meter. Say you have a daily goal of 1000 words, or writing for 20 minutes? You put that in, and the meter up top will let you know when you get to your goal.

My only wish is that it worked better with Mac Journal. I already start small ideas in MJ, and have even written a short story or two there. It would be nice if I could link, say, a research entry in StoryMill with a journal entry in Mac Journal so it could be updated from either program.

Is this better than NeoOffice? In the end it is all about how you work, how you use these things. The words on the screen are the important part, not how many bells your software has (well, unless you are writing software, but that is another post, I suppose).

I can see myself using StoryMill to write and organize, but at the same time it falls into the previous entry’s issue of too many programs taking notes. In the end I still need my information properly organized so that it can be found (hence wishing it linked up with MJ).

BitTorrent File System

This is a paper to discuss an idea for a distributed cloud computing system. The system would distribute and hold its data across nodes that are not co-located.

From the front end, the system would be identical to any other cloud computing solution. The user could make calls to retrieve or store data from the web (or other applications).

On the back end, the system would differ from a standard cloud system in that instead of being an array of centralized servers, the system would use a P2P method of distributing the data and loads throughout itself.

Let’s take the example of incoming data.

Data is sent to the cloud. This data is then processed by the RAID software and divided as required (RAID 5, 7, etc.). The distributed part then takes each piece of that RAID split; call the pieces bits (not those bits) for this paper.

Now each bit will be distributed to multiple clients via a BitTorrent-like P2P system. One bit would then be copied onto 2, 4, or 8 nodes (however many are deemed necessary for reliability).
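
The split-and-replicate step above can be sketched in a few lines of Python. Everything here is hypothetical: the fixed bit size, the replication factor, and the content-hash placement scheme are illustrative choices, not part of any real BitTorrent or RAID implementation.

```python
import hashlib

REPLICATION_FACTOR = 4  # copies per bit: 2, 4, 8, however many reliability demands

def split_into_bits(data, bit_size=1024):
    """Divide one piece of the RAID split into fixed-size 'bits'."""
    return [data[i:i + bit_size] for i in range(0, len(data), bit_size)]

def assign_nodes(bit, nodes, factor=REPLICATION_FACTOR):
    """Pick `factor` nodes for a bit; hashing the content means the
    same bit always maps to the same starting node."""
    start = int(hashlib.sha256(bit).hexdigest(), 16) % len(nodes)
    return [nodes[(start + i) % len(nodes)] for i in range(factor)]

nodes = [f"node-{n}" for n in range(10)]
bits = split_into_bits(b"x" * 4096)
placement = {i: assign_nodes(b, nodes) for i, b in enumerate(bits)}
```

Each bit lands on four distinct nodes, so losing any single node still leaves three live copies.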

Then when the data is called, the RAID will request it just as it normally would. However, the system will retrieve the data in BitTorrent fashion by calling it from the available nodes. Nodes that have dropped out or are slower will be tagged and recorded, and the data will be pulled from the better sources.

Once assembled by the BitTorrent layer, the bit will be presented back to the RAID as requested. The RAID will assemble the file as required and present it back to the web request.
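
A minimal sketch of that retrieval logic, assuming the system keeps a per-node performance score. The scores, node names, and the `pick_source` and `fetch_bit` helpers below are all made up for illustration.

```python
# Hypothetical per-node performance records: higher = faster and more reliable.
node_scores = {"node-0": 0.95, "node-1": 0.40, "node-2": 0.80, "node-3": 0.10}

def pick_source(replicas, dropped):
    """Choose the best-scoring live node holding a bit, skipping dropped nodes."""
    available = [n for n in replicas if n not in dropped]
    if not available:
        raise RuntimeError("no live replica for this bit")
    return max(available, key=lambda n: node_scores.get(n, 0.0))

def fetch_bit(bit_id, replicas, dropped, store):
    """Pull one bit from the best source; `store` stands in for the network."""
    node = pick_source(replicas, dropped)
    return store[node][bit_id]
```

When `node-2` drops out, the request simply falls through to the next-best replica.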

As stated, the nodes themselves will be tagged and their performance recorded. Highly reliable nodes will be called on more frequently than less reliable ones. The system would ensure that a certain percentage of each bit’s copies live on higher-reliability nodes rather than lower-reliability ones. The BitTorrent client would use this data to shift bits onto different nodes, repopulating data as required.

Lower-reliability nodes are not completely useless. They can be used to help with this repopulation, as well as for storage of less frequently requested data. This logic would be based on available cloud space, amount of traffic, even peak times and peak availability.
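
One way the repopulation decision could work, sketched under the assumption that reliability is a single score per node. The threshold, the fraction, and the `rebalance` function itself are arbitrary illustrative choices.

```python
import math

def rebalance(replicas, scores, min_reliable_fraction=0.5, threshold=0.75):
    """Suggest extra high-reliability nodes to copy a bit onto, so that
    high-scoring nodes hold at least `min_reliable_fraction` of the
    current replica count."""
    reliable = [n for n in replicas if scores.get(n, 0.0) >= threshold]
    needed = math.ceil(min_reliable_fraction * len(replicas)) - len(reliable)
    if needed <= 0:
        return []  # placement already meets the reliability target
    # Best unused high-reliability nodes first.
    candidates = sorted(
        (n for n in scores if n not in replicas and scores[n] >= threshold),
        key=lambda n: scores[n],
        reverse=True,
    )
    return candidates[:needed]
```

A bit sitting mostly on shaky nodes gets a copy pushed to the strongest spare node; a bit that already meets the target is left alone.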

Now the question is: Why? Why divide this system up into all of these nodes and introduce another step in the process over the current system? The answer is that this is not the current system. This is where the ‘distributed’ part comes in.

For the nodes we create a client program. It is installed on a computer and allows configuration of the amount of shared space, location, etc. That computer is now a node. So instead of setting up a huge server farm, the node software can be installed on computers anywhere there is an internet connection.

Imagine if all the computers in a college computer lab donated just a single gig of space to being a node. Then maybe all of the computers in an Apple store or a Best Buy. A PS3 or XBox client could be made to contribute. Even an iPhone or Blackberry client offering as little as 10 MB of space could be used.
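
The client configuration for those two extremes, a lab PC and a phone, might look something like this sketch. All field names and values are invented for illustration, not taken from any real client.

```python
from dataclasses import dataclass

@dataclass
class NodeConfig:
    """Settings a volunteer machine exposes when it joins the cloud.
    All fields here are hypothetical."""
    node_id: str
    shared_bytes: int       # how much disk space is donated
    location: str           # coarse geography, for spreading replicas
    busy_hours: tuple = ()  # hours when the machine should be avoided

# A lab PC donating one gigabyte, busiest 9am to 5pm.
lab_pc = NodeConfig(node_id="lab-pc-07", shared_bytes=1 * 1024**3,
                    location="campus-lab", busy_hours=(9, 17))

# A phone donating ten megabytes.
phone = NodeConfig(node_id="phone-42", shared_bytes=10 * 1024**2,
                   location="mobile")
```

The cloud would tally these declarations to know how much total space it has and where it sits.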

This would significantly reduce outside influences on data availability. Things like power outages, natural disasters, even high traffic load due to sporting events could be worked around by spreading the data geographically both on an individual node level and on a node collection level.

The space would be tallied, prioritized by various parameters and prepared for use.

Security will be a concern for people using this. How do I know that my data is safe out there on someone’s personal PC? First, only a small part of any one file would be located on any one node. The next piece of upstream information available to that node is the BitTorrent client, which only knows where the other copies of the bits are. A snooping node owner would have to go up an additional step into the RAID and then back down through the BitTorrent layer to find a usable chunk of any one file.

The same argument applies to the security of the node provider. For example, should a pirated movie be uploaded to the cloud, each node would receive parts so small that there would be no reasonable argument that the provider knew it was there.

Extra security could be imposed by making the node into an encrypted image. This would further ensure the data’s safety, but may have a negative impact on the speed of the node. This would need to be investigated.

This distributed cloud computing allows for a more robust system by decentralizing the hardware as well as allowing for expandability beyond boundaries such as building size and electrical power. It would take the one last part of open source cloud computing, the cloud itself, and allow it to be open as well.

A system such as this could be used on a grand scale, as one large cloud, or in smaller forms, as several small clouds that could be specialized. Just like BitTorrent itself, there could be multiple gateways (torrent trackers, or cloud cars?) into the cloud.