How to use multiple views in Notepad++

May 19, 2012

Good editors always have some support for showing multiple editor windows, and this is done in many ways. See the "Comparison of document interfaces" link in the references below.

Multiple Views
Notepad++
I was happily surprised to find that Notepad++ also supports this. The trick is to right click on an edit window’s tab and choose “Move to other view” or “Clone to other view”.

Eclipse IDE
By the way, the Eclipse IDE also supports multiple docked edit views. You can even open two views into the same file. However, Notepad++ allows you to synchronize the two views (synchronized scrolling), which, afaik, Eclipse does not.

Multiple instances
While the above is great, sometimes you want two Notepad++ instances, for example, to open one on each display monitor. For that you just have to add -multiInst to the Notepad++ startup command. For example, on my Windows 7 system my Notepad++ shortcut has:
"C:\Program Files (x86)\Notepad++\notepad++.exe" -multiInst

References
Comparison of document interfaces
Managing two views in a single Notepad++ instance


Show hidden windows utility on PC

March 4, 2012

Sometimes a program may crash or show deranged behavior, and one of its dialogs cannot be accessed. This just happened to me. A program looks like it is stuck on a threading issue. When I look at its thread list it seems to be waiting for user input, yet there is no dialog visible. Clicking on the program's window just gives the waiting mouse pointer. Sure, you can just kill the program, but that may not help you find out what the problem really is.

I remember years ago I used a utility that could show these hidden windows. Good luck searching the web for something like "show hidden windows" or better phrases. You won't find it. Well, yeah, you could. I did. But, I'm good, smirk.

Note that this utility is pretty basic; it just shows a list of window objects. Some of them should not be unhidden: they will lock up the utility itself or cause other problems. Perhaps there is something out there that is better? It seems this should be part of the Sysinternals utilities.
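For the curious, the core of such a utility is small. Below is a minimal sketch, not the Unhider tool itself, of listing top-level windows that have a title but are currently not visible, using the Win32 API from PowerShell (the class and variable names are my own invention):

Add-Type -TypeDefinition @'
using System;
using System.Collections.Generic;
using System.Runtime.InteropServices;
using System.Text;

public static class HiddenWindowList {
    delegate bool EnumProc(IntPtr hWnd, IntPtr lParam);

    [DllImport("user32.dll")]
    static extern bool EnumWindows(EnumProc callback, IntPtr lParam);
    [DllImport("user32.dll")]
    static extern bool IsWindowVisible(IntPtr hWnd);
    [DllImport("user32.dll", CharSet = CharSet.Auto)]
    static extern int GetWindowText(IntPtr hWnd, StringBuilder buffer, int max);

    // Walk every top-level window and keep the titled ones that are hidden.
    public static List<string> Find() {
        List<string> found = new List<string>();
        EnumWindows(delegate(IntPtr hWnd, IntPtr lParam) {
            StringBuilder title = new StringBuilder(512);
            GetWindowText(hWnd, title, title.Capacity);
            if (!IsWindowVisible(hWnd) && title.Length > 0)
                found.Add(string.Format("0x{0:X}  {1}", hWnd.ToInt64(), title));
            return true;   // keep enumerating
        }, IntPtr.Zero);
        return found;
    }
}
'@

[HiddenWindowList]::Find()

Actually unhiding one of the listed handles would then be a call to the user32 ShowWindow function, which is where the "cause other problems" warning above applies.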

Links

  1. Unhider
  2. Windows Sysinternals

Use SED to print Windows path, split with line feeds

December 17, 2011

Yea, this is easy, IF you use SED much.

With Cygwin installed, SED, the stream editor, is available. In a command shell, execute:

set path | sed s_;_;\n_g

Explanation

  1. set path will print the Windows path (the Path environment variable). Path entries are separated by ";".
  2. sed invokes the Cygwin-installed SED command; Cygwin's bin folder is on the executable path.
  3. "s" indicates the substitute command.
  4. "_", the underscore, is used as the delimiter between the parts of the substitution.
  5. ";" is the regular expression to match.
  6. ";\n" is the string to substitute with.
  7. "g" is the substitute flag for global replacement.

No doubt there are more direct ways of doing this. Using PowerShell would be the most appropriate if it is available.
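For instance, with PowerShell (version 2.0 or later) one rough equivalent is to split the Path environment variable on the separator instead of substituting into it:

$env:Path -split ';'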

Example
If your path is: C:\fee;d:\fi;c:\foo;\fum

The result would be:
C:\fee;
d:\fi;
c:\foo;
\fum

References

  1. Copy of this post on new blog
  2. GNU sed
  3. Sed – An Introduction and Tutorial by Bruce Barnett
  4. SED, stream editor

 




Alarm clock: Windows Media Player via PowerShell

August 27, 2011

The modern personal computer is so powerful, yet using it as an alarm clock is not so easy. There are a few commercial and free "PC Alarm Clocks" available, so all is not lost.

But, there is already a media player in Windows and a task scheduler, so it should be a piece of cake to schedule Windows Media Player to play a song or playlist every morning.

One way not to do it is to just schedule Media Player to start up and play a song, as shown in the article "Windows Media Player Alarm Clock using Task Scheduler". As the author of the article states:

The only problem with this alarm clock is that if you are LOGGED OUT or the system is in standby prior to activating, it will run WMP as admin and you wont even see it open when you log in. To turn it off you will have run Task Manager and actually kill the process. A small price to pay for the reliability of a loud clock ….

What we need is a programmatic solution, and fortunately Windows now has a decent scripting language, PowerShell, with access to the Windows features. Thus, we use a script (from “Weekend Scripter: A Cool Music Explorer Script“):

Add-Type -AssemblyName presentationCore

$mediaPlayer = New-Object system.windows.media.mediaplayer
$path = "C:\Users\Public\Music\jazz\Oregon\1000 Kilometers"
$files = Get-ChildItem -path $path -include *.mp3,*.wma -recurse

foreach($file in $files)
{
 "Playing $($file.BaseName)"
  $mediaPlayer.open([uri]"$($file.fullname)")
  $mediaPlayer.Play()
  Start-Sleep -Seconds 10
  $mediaPlayer.Stop()
} 

Save it as a .ps1 file, for example: c:\batch\PlayMedia.ps1
In the above script, change '$path' so that it points to your own music folder. Note that the script plays each mp3 it finds for only ten seconds. How would you change it to just play the whole work?
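One possible answer, as a sketch only (it assumes the MediaPlayer object reports the track length via NaturalDuration once the file is opened, and is not tested on every setup): wait until the duration is known, then sleep for that long instead of a fixed ten seconds.

foreach($file in $files)
{
  "Playing $($file.BaseName)"
  $mediaPlayer.open([uri]"$($file.fullname)")
  # wait until the media is opened and its duration is known
  while (-not $mediaPlayer.NaturalDuration.HasTimeSpan) { Start-Sleep -Milliseconds 250 }
  $mediaPlayer.Play()
  # sleep for the length of the track, then move on to the next file
  Start-Sleep -Milliseconds ([int]$mediaPlayer.NaturalDuration.TimeSpan.TotalMilliseconds)
  $mediaPlayer.Stop()
}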

Next you’ll open up the Task Scheduler and create a new basic task. The Actions tab is where you’ll edit how to run the task. So, enter into the dialog box:

Action:  Start a program
Program/script:  powershell
Add arguments (optional): -command "& 'C:\batch\PlayMedia.ps1' " 

Of course, set the days and times, and you can even select the option to wake the computer to run the task.

I learned how to run PowerShell in scheduler at “How to schedule a Windows Powershell script“.

Advantages?
Well, now that you can program the system to act like a musical alarm clock, you can tweak it to use playlists, radio, etc. Soon you'll be getting it to wake you up with "A Night On Bald Mountain" shaking the house, and while you are running around the bedroom screaming from fright, the PC will calmly start a batch of whole wheat pancakes and pour on the real maple syrup; slowly the aroma from the coffee it started brewing will start to permeate your consciousness, and no, that giant Chernabog is not coming for you.

What is PowerShell?
Surprisingly, many technical people have never heard of PowerShell. It is Microsoft's innovation in their management infrastructure. Before it, the "command shell" and its associated batch language were really decades-old DOS-like technology. That shell could not compare to Unix/Linux shells like bash. PowerShell is an object-based scripting language and shell. Afaik, it is unique in that instead of passing only text, as in most Unix pipe examples, it pipes objects.
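As a small illustration of that last point, every stage in the pipeline below receives Process objects with typed properties rather than lines of text to re-parse (the 100MB threshold is arbitrary):

Get-Process | Where-Object { $_.WorkingSet64 -gt 100MB } |
    Sort-Object WorkingSet64 -Descending |
    Select-Object Name, Id, WorkingSet64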

System

  • Windows 7 64bit Professional
  • Windows Powershell 2.0



Can’t save a new TiddlyWiki in IE

October 7, 2010

I wrote before about saving issues in the Chrome browser; now I have a similar problem. I downloaded a new TiddlyWiki version, and when I try to save a change in it, I get problems in the IE browser. I don't have administrator access to this system, so maybe that is part of the issue?

Here is the fix, which someone mentioned in the TiddlyWiki forum: copy the file and save it under a new file name. This won't work with just a file-level copy; you have to copy the contents themselves. Open the downloaded file in a text editor (I used GVim), copy its contents into a new editor buffer, and save that under a new file name. When you open this new file you will get the usual first-time IE security and ActiveX warnings. After that, the TiddlyWiki file is a "normal" instance.

BTW, I recommend the SinglePageModePlugin. It lets you set the way TiddlyWiki opens tiddlers, which is less confusing, especially for casual TiddlyWiki use.

What the heck is TiddlyWiki? See “TiddlyWiki for the rest of us” for an end-user guide.

Links

  1. Duplicate: Can’t save a new TiddlyWiki in IE

Why a Repository for Java Dev?

August 4, 2010

Excellent article on why a Repository Manager is crucial to the software development process.  It makes the case that not using a Repository Manager is the cause of many anti-patterns. It could be that the repository is the next essential, besides the VCS, in development best practices.

The article uses Maven as a case in point, and it is also selling a product (nothing wrong with that), so perhaps one should be a little wary. However, there are other dependency managers available, such as Ivy, which are used by other build systems like Gradle.

I have seen places that do not use a Software Configuration Management (SCM) Version Control System (VCS).  And then there are places that use a VCS incorrectly; as this article points out, the VCS becomes an ad hoc file store for everything.   I remember one place where our partner gave us access to their SCM to download a project source, and we got everything!  They had application executables, utilities, documents, binaries, and other things like their Office apps and other tool chains, which had nothing to do with the project we wanted the source of.  Someone must have accidentally imported their whole PC into CVS, yikes.

Enter the Repository, which, I believe, first became "popular" with the introduction of Maven.  When I tried to introduce the use of an internal repository at a former company I got push back:  "It's very easy to just put one's jars and dependent binaries into version control" or "Who needs that Repository play toy stuff!"  Oh well. In that situation it was probably the best decision; there is initial complexity in adopting any tool that aims to reduce complexity.

Is just using Maven or Gradle with an internal repository as a proxy to external ones enough repository management (which Sonatype calls stage one: Proxying Remote Repositories), or does one have to use a full-blown Repository Manager subsystem? Why does using the internal repository as the destination for one's own output require a Repository Manager (which Sonatype calls 'stage two')? The Maven site has this to say:

Aside from the benefits of mediating access to remote repositories, a repository manager also provides something essential to full adoption of Maven. Unless you expect every member of your organization to download and build every single internal project, you will want to provide a mechanism for developers and departments to share both SNAPSHOT and releases for internal project artifacts. A Maven repository manager provides your organization with such a deployment target. Once you install a Maven repository manager, you can start using Maven to deploy snapshots and releases to a custom repository managed by the repository manager. Over time, this central deployment point for internal projects becomes the fabric for collaboration between different development teams. — Repository Management with Maven Repository Managers.
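For concreteness, that "deployment target" usually shows up in a project's POM roughly as below (the ids and URLs here are made up); running "mvn deploy" then publishes the artifact to the matching internal repository:

<distributionManagement>
  <repository>
    <id>internal-releases</id>
    <url>http://repo.example.com/releases</url>
  </repository>
  <snapshotRepository>
    <id>internal-snapshots</id>
    <url>http://repo.example.com/snapshots</url>
  </snapshotRepository>
</distributionManagement>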

If you don't think this is important, you probably have not been on a project with disasters like "xxx.jar was sent to a customer and we don't know what version it is or who made it." You know, putting the version number into the binary's file name would defeat the purpose of using a VCS, no?

On a side note: Why did the Java development community develop its own repository system when there were plenty out there such as the application-level package systems used by the Linux community?

Links

Maven Repository Managers for the Enterprise

Why Do I Need a Repository Manager?

Maven Repository Manager Feature Matrix

Archiva

Artifactory

Nexus

Gradle:  http://www.gradle.org/

Ivy:  http://ant.apache.org/ivy/

Manage dependencies with Ivy

Maven:  http://maven.apache.org/

Ant:  http://ant.apache.org/

Continuous Integration:  http://en.wikipedia.org/wiki/Continuous_integration



How to auto redial busy line with iPhone?

July 13, 2010

Short answer is you can’t.

Background

At least I have not been able to find a procedure or free app that does this.   I found some web pages that say to use the Call button to dial the last call in the Recents list.  The Call button on the keypad GUI didn't work for me that way: it still requires multiple key presses, and it did not bring up the last dialed number as stated.

Update: the Call button method will use the last dialed number, not the most recent number in the Recent list.  So, using Call is a viable approach.

Suggestion

So what to do?  The easiest way I found is to add the target number to your favorites list.  Now to rapidly redial just tap the contact in the Favorites list, then tap the Speaker icon.  Busy?  Just tap “End call”.  Repeat.

Questions

My question:  why doesn't the iPhone have a real redial capability when other smart phones do?   Is it some kind of industry requirement, or a way to reduce possible negative uses?

Update
August 12, 2011: Anything changed about this feature or lack thereof? I am still using the first iPhone (yup, doesn’t even update anymore, and the fake GPS map doesn’t even work). Do the newer iPhone or Android phones have this feature built in?

Links

  1. Dupe on new site: How to auto redial busy line with iPhone?
  2. How to auto redial busy line with Android?



Groovy Object Notation (GrON) for Data Interchange

May 17, 2010

Summary

Foregoing the use of JSON as a data interchange format when Groovy applications must interact internally or with other Groovy applications would be, well, groovy.

Introduction

JavaScript Object Notation (JSON) is considered a language-independent data interchange format. However, since it is based on a subset of the JavaScript (ECMA-262 3rd Edition) language, this is not fully correct. In fact, many languages must marshal to and from JSON using external libraries or extensions. This complicates applications since they must rely on more subsystems and there may be a performance penalty to parse or generate an external object notation.

If an application must only interact within a specific language or environment, such as the Java Virtual Machine (JVM), perhaps using the host language's data structures and syntax will be a simpler approach. Since Groovy (a compiled dynamic language) has built-in script evaluation capabilities, high-level builders (for Domain Specific Language (DSL) creation), and meta-programming capabilities, it should be possible to parse, create, transmit, or store data structures using the native Groovy data interchange format (GDIF), i.e., based on the native Groovy data structures.

Syntax example

Below is an example JSON data payload.

JSON (JavaScript) syntax:

{"menu": {
  "id": "file",
  "value": "File",
  "popup": {
    "menuitem": [
      {"value": "New", "onclick": "CreateNewDoc()"},
      {"value": "Open", "onclick": "OpenDoc()"},
      {"value": "Close", "onclick": "CloseDoc()"}
    ]
  }
}}

Below is the same data payload, this time using Groovy syntax. Note that there are not too many differences; the most striking is that maps are created using brackets instead of braces. It looks simpler, too.

Groovy syntax:

[menu: [
	id: "file",
	value: "File",
	popup: [
	menuitem : [
	 [ value: "New", onclick: "CreateNewDoc()" ],
	 [ value: "Open", onclick: "OpenDoc()" ],
	 [ value: "Close", onclick: "CloseDoc()" ]
     ]
  ]
]]

Code Example

/**
 * File: GrON.groovy
 * Example class to show use of Groovy data interchange format.
 * This is just to show use of Groovy data structure.
 * Actual use of "evaluate()" can introduce a security risk.
 * @sample
 * @author Josef Betancourt
 * @run    groovy GrON.groovy
 *
 * Code below is sample only and is on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
 * or implied.
 * =================================================
 */
class GrON {
    static def message =
    '''[menu:[id:"file", value:"File",
     popup:[menuitem:[[value:"New", onclick:"CreateNewDoc()"],
     [value:"Open", onclick:"OpenDoc()"], [value:"Close",
     onclick:"CloseDoc()"]]]]]'''

    /** script entry point   */
    static main(args) {
       def gron = new GrON()
       // dynamically create object using a String.
       def payload = gron.slurp(this, message)

        // manually create the same POGO.
        def obj = [menu:
	    [  id: "file",
	       value: "File",
                 popup: [
                   menuitem : [
                   [ value: "New", onclick: "CreateNewDoc()" ],
                   [ value: "Open", onclick: "OpenDoc()" ],
                   [ value: "Close", onclick: "CloseDoc()" ]
          	]
           ]
         ]]

         // they should have the same String representation.
         assert(gron.burp(payload) == obj.toString())
    }

/**
 *
 * @param object context
 * @param data payload
 * @return data object
 */
def slurp(object, data){
	def code = "{->${data}}"  // a closure
	def received = new GroovyShell().evaluate(code)
	if(object){
		received.delegate=object
	}
	return received()
}

/**
 *
 * @param data the payload
 * @return data object
 */
def slurp(data){
     def code = "{->${data}}"
     def received = new GroovyShell().evaluate(code)
     return received()
}

/**
 * @param an object
 * @return its string representation
 */
def burp(data){
     return data ? data.toString() : ""
}

} // end class GrON

Possible IANA Considerations

MIME media type: application/gron.

Type name: application

Subtype name: gron

Encoding considerations: 8bit if UTF-8; binary if UTF-16 or UTF-32

Additional information:

Magic number(s): n/a

File extension: gron.

Macintosh file type code(s): TEXT

API

To be determined.

Security

Would GrON be a security hole? Yes, if it is implemented using a simple evaluation of the payload as if it were a script. The example shown above used evaluate() as an example of ingestion of a data object. For real world use, some kind of parser and generator for object graphs would be needed. The advantage would accrue if the underlying language parser could be reused for this.

Now this raises the question: if Groovy must now support a data parser, why not just use JSON with the existing libraries, like JSON-lib?

Is using the Java security system an alternative as one commenter mentioned?

Notes

The idea for GrON was formulated about a year ago. I delayed posting it since I wanted to create direct support for it. However, that task required more time and expertise than I have available at this time.

I was debating what to call it, if anything. Another name I considered was Groovy Data Interchange Format (GDIF), but I decided to follow the JSON name format by just changing the "J" to "G" and the "S" to "r" (emphasizing that Groovy is more than a scripting language; it's an extension of Java).

Updates

10Sept2011: See also this post: “JSON Configuration file format“.

9Feb2011: Looks like Groovy will get built in support for JSON: GEP 7 – JSON Support

I found (May 18, 2010, 11:53 PM) that I’m not the first to suggest this approach. See Groovy Interchange Format? by DeniseH.

Recently (Oct 3, 2010) found this blog post:
Groovy Object Notation – GrON?

Mar 22, 2011: Groovy 1.8 will have JSON support built in.



Creative Commons License
Groovy Object Notation (GrON) for Data Interchange
by Josef Betancourt is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 Unported License.
Based on a work at wp.me.


Duplicate files tool? Free, but your credit card is …?

April 25, 2010

The web made sales games a lot easier, in particular the bait and switch.

I just resuscitated an old hard drive, bought an external enclosure for it.  Hooked it up.  Nice.  Now I want to clean it up.  Does it have stuff I may be missing from other drives?  Can’t tell, so many files and many duplicates.

So, the first step is to remove the duplicate files.  Do a web search for Win7 file-duplication utilities.  What a mess.  It's hard to figure out what is a legitimate offering.   I opened a forum where the question was asked: what is a good duplicate file finder?  One response was that "XYZ Duplicate File Finder" (I changed the name of the actual tool) was easy and free.  Did the developer of the software post it?  Is it really free?

I visited the site, xyz_site.  Well, it's a '.com' site, but that can mean anything.  For example, they can sell their product but give away crippled versions or older products for free, like winzip.com.   The thing is, nowhere on the site does it state the price.  If you dig into the site there is a page that indicates you have to pay for it: the help-register-xxx.html page.   Yet even here there is no price!  In fact, the license expires, but on the license FAQ page it says you only lose the ability to receive free updates.

I don’t mean to single out the makers of this software.  Perhaps I missed the price somewhere or misunderstood the site itself.  This type of site is very common in the Windows world.   I’m all for people making an honest buck, but the key here is honest.

Let's look at the winzip site.  Right on the first page, it gives the price.  That's nice.  What I don't like about it, though, is that there is a download button.  What, I can download a trial or free version?  Nope; click the button, and way down at the bottom of the resulting download page they tell you it's for registered users only, which you probably won't see unless you scroll the window.  Huh?  Why not call the "Download" button the "Upgrade" button instead?

I don't think you see this kind of stuff in the Linux or Unix world.  Probably because people in the *nix world expect every useful utility to be free or eventually built into the OS itself (OpenSolaris has dedupe via ZFS).

I guess in the Windows world free means, not to the user, but to the sellers who are free to do whatever they want.

Now how do I remove duplicate files?  Maybe I can just write a script to do it.  I could create a database and then query for duplicates; that must be easy in SQL.  I wonder how it's done in Linux; probably a one-line Perl script.
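As a rough sketch of what such a script could look like on Windows (PowerShell; the drive path is made up, and locked files are simply skipped over by -ErrorAction): group files by size first so only same-sized candidates get hashed, then group by MD5 hash and report any group with more than one member.

$md5 = [System.Security.Cryptography.MD5]::Create()
Get-ChildItem -Path E:\oldDrive -Recurse -Force -ErrorAction SilentlyContinue |
  Where-Object { -not $_.PSIsContainer } |
  Group-Object Length | Where-Object { $_.Count -gt 1 } |
  ForEach-Object { $_.Group } |
  ForEach-Object {
    # hash each same-sized candidate file
    $stream = $_.OpenRead()
    try     { $hash = [BitConverter]::ToString($md5.ComputeHash($stream)) }
    finally { $stream.Close() }
    New-Object PSObject -Property @{ Hash = $hash; Path = $_.FullName }
  } |
  Group-Object Hash | Where-Object { $_.Count -gt 1 } |
  ForEach-Object { ""; "duplicates:"; $_.Group | ForEach-Object { "  " + $_.Path } }

Grouping by size before hashing is the usual trick: it avoids reading the vast majority of files that cannot possibly have a duplicate.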

By the way, I had some luck with Duplicate File Finder 0.8.0 by Matthias Boehm.  Also, in the Linux world this can be done with very clever bash scripting.  Here is a sample scripting approach.

What I don't like about the tools I've seen so far is that they are very bad in terms of usability.  They should look at how diff and merge tools do things, like KDiff3.  Another example to look at is the graphical approach found in D-Dupe.


How to do Ad Hoc Version Control

March 28, 2010

By Josef Betancourt,  Created 12/27/09    Posted 28 Mar 2010

Scenario

You will modify a group of files to accomplish something.  For example, you're trying to get a server or network operational again after some new security requirements were applied.   During this process you also need to keep track of changes so that you can back out of dead ends or combine different approaches.  Or you simply need to temporarily keep revisions of a group of files for a task-based project.

Keywords

Version Control, Revision Control (RCS), Configuration Management (CM), Version Control System (VCS), Distributed Version Control System (DVCS), Mercurial, Git, Agile, Subversion

Lightweight Solution

A lightweight Ad Hoc Version Control (AHVC) approach may be desirable.  Note that even when there are other solutions in place, a lightweight approach may still be useful.  What are the requirements of a lightweight and workable solution?

  • Automated:  Through human error a file or setting may not get versioned, or may even be lost.  Thus, all changes must be tracked automatically.
  • Small:  A large sprawling system that could not even fit on a thumb drive is too big.
  • Multiplatform:  It should be able to run on the major operating systems.
  • Non-intrusive:   Use of this system should not change the target systems in any way.  Ideally it should run from a thumb drive or CD.  And, if there is a change, backing it out should be foolproof.
  • Simple:  Anything that requires training or complexity will not be used or adopted.  This reduces collaborative adoption and improvements in tools and process.
  • Fast:   Should be fast and optimized for local use.
  • Distributed:   Since issues can span "boxes", it should be able to work with network resources.  This reduces the applicability of a purely GUI-based solution.
  • Scripting:  Should be easy to optimize patterns of use by creating high-level scripts.
  • Small load:  Of course, we don’t want to grab excessive CPU and memory resources from the target system.
  • Non-Admin:  Even in support situations, full admin access may not be available.
  • Transactional:  Especially in server configuration, changes should be consistent.  Failure to save or revert even one file could be disastrous.
  • Agile:  Not about tools but results.

DVCS

At home, when I create a folder to work on some files, like documents or programming projects, I will usually create a version control repository right in the folder.  This has saved me a few times.  I also tried to do this at a prior professional assignment and was partially successful (discussed later).

I used a Distributed Version Control System (DVCS).  Since it does not require a centralized server or complicated setup, a DVCS meets most of the lightweight requirements.  Though a VCS is usually used for collaborative management of changing source files, it may be ideal here.  One popular use case is managing one's /etc folder in Linux with a VCS.

Mercurial

A good example of a DVCS is:

Mercurial:

“(n) a fast, lightweight Source Control Management system designed for efficient handling of very large distributed projects.” – http://mercurial.selenic.com/

Note:  I use Mercurial as the suggested system simply because I started with it.  Git and others are just as applicable.

To create a repository in a folder, one simply executes three commands: init, add, and commit.  The "init" creates a subfolder that serves as the folder history, or true repository.  The "add" is recursive, adding all the files to version control, and the "commit" makes these changes permanent.  Of course, one can 'add' only a subset of files, create directives for files to skip, and so forth.

Before:

\GIZMO
+—client
\—server

In a command shell

cd \GIZMO
hg init
hg add
hg commit -m "initial commit of project"

After:

\GIZMO
+—.hg
+—client
\—server

The terminology may be a little confusing.  What happened is that now the GIZMO folder has a Mercurial repository which consists of a new .hg folder, and the other existing files and folders comprise the working directory (see Mercurial docs for a more accurate description).  There are no other changes!

That's all it takes to create a repository.  No puzzling about storage, unique names, hierarchy, and all the details that go with central servers.  The Mercurial docs show how to do the other routine tasks, like going back to a previous changeset or retrieving file versions and so forth.  Here is how to view the list of files in a particular changeset:

c:\Users\jbetancourt\…cts\adhocVersioning>hg -v log -r 0

   changeset:   0:f29a0b0ad03c
   user:        Josef Betancourt <josef.betancourt>
   date:        Sat Jun 21 10:53:11 2008 -0400
   files:       AdhocVersioning.doc
   description:
   first commit

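And a few of the other everyday operations mentioned above, for illustration (the file name here is invented):

hg update -r 0             (set the working directory back to changeset 0)
hg cat -r 0 gizmo.conf     (print the changeset-0 version of a single file)
hg revert -r 0 gizmo.conf  (restore just that file to its changeset-0 contents)
hg update                  (return to the latest changeset)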
And, here is a log output using the optional graph log extension (http://mercurial.selenic.com/wiki/GraphlogExtension)

c:\Users\jbetancourt\...adhocVersioning>hg glog -l 2
@  changeset:   9:25f4c55e4860
|  tag:         tip
|  user:        Josef <josef.betancourt>
|  date:        Fri Mar 26 22:43:56 2010 -0400
|  summary:     removed repo.bat
|
o  changeset:   8:43a33533c992
|  user:        Josef <josef.betancourt>
|  date:        Thu Mar 25 22:08:35 2010 -0400
|  summary:     removed old files
|

For the lone individual using ad hoc versioning, a sample workflow is given at Learning Mercurial in Workflows.

Ad Hoc Sharing

A DVCS, true to its name, shines in how it allows distributed sharing of these local repositories.  Thus, when a team is working on a technical issue (ad hoc), it is very easy to share each other's work. Mercurial includes an embedded web server that can be used for this.

Mercurial’s hg serve command is wonderfully suited to small, tight-knit, and fast-paced group environments.  It also provides a great way to get a feel for using Mercurial commands over a network.

This is illustrated with the coffee shop scenario; see the manual:

A sprint or a hacking session in a coffee shop are the perfect places to use the hg serve command, since hg serve does not require any fancy server infrastructure … Then simply tell the person next to you that you’re running a server, send the URL to them in an instant message, and you immediately have a quick-turnaround way to work together. They can type your URL into their web browser and quickly review your changes; or they can pull a bugfix from you and verify it; or they can clone a branch containing a new feature and try it out.

Of course, this would not scale and is for "on-site" use among task-focused group members.

A great workflow image by Leon Bambridge for team sharing.

Another simple scenario is taking a few documents from one location to another with a flash drive (in lieu of using a cloud storage service). Instead of doing a copy or cp, one can simply create a DVCS repository in the work directory, then clone it onto the flash drive. Then at home one pulls from the flash drive into the repository at home. When finished editing the files, one pushes to the flash repo and does the reverse process at the work site. Not only are you not missing any files, you are also keeping track of prior versions. Note that, for security reasons, not everyone has unfettered web access, nor should they.
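A rough sketch of that round trip, with invented paths and E: as the flash drive:

rem at work
cd C:\work\docs
hg init
hg add
hg commit -m "starting point"
hg clone . E:\repos\docs

rem at home (clone once; afterwards just pull and update)
hg clone E:\repos\docs C:\home\docs
cd C:\home\docs
(edit files)
hg commit -m "edits made at home"
hg push E:\repos\docs

rem back at work
cd C:\work\docs
hg pull E:\repos\docs
hg update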

Revisiting the flash drive scenario above: if you plan to use a flash drive for transport multiple times and the group of files is large, the "bundle/unbundle" hg commands are a good tool; see Communicating Changes on the Mercurial site.

Security
Every connection must be secure and every file must be encrypted, especially if on flash drives. The security policies of the employer come first. Even if only for your own personal ad-hoc use, you should be careful with exposing your data.

Advantages

  • Easy to use.  The commands needed to perform normal tracking of changes are few and simple.  The conceptual model is also simple, especially if one is not fixated on the use of a centralized Version Control System.
  • Some file changes may be dependent on or result in other file changes.  In a DVCS, commits or check-ins create a "changeset" in the local repository.  This naturally keeps track of related changes.
  • You may need to work on different operating systems.  Mercurial runs on many systems, including Windows.
  • You don't want to change the existing system (low intrusion).  Mercurial can be deployed to a single folder, and the repositories it creates do not pollute the target folders.  For example, in the Subversion VCS, ".svn" folders are created in each subfolder in the target.  Not a major drawback, but it complicates things down the line, such as when using file utilities and filters.

Issues

Unfortunately, the use of a DVCS is not perfect and has its own complexities.  For Mercurial, in the context of this post, these are handling binary files, versioning non-nested folders, and, probably for any VCS, the semantic gap between the project task-based view and the versioning mindset.

1. Binary Files

Mercurial is really for tracking non-binary files; that way the advantages of versioning are realized.  Diffs and merges are not normally applied to binary files. Further, binary files impact performance and storage when they reach a certain size.  Yet, for ad hoc use, binary files will have to be easily tracked.  Binary files could be images, libraries, jars, zips, documents, or data.

Large binaries are a problem with all VCS systems.  One author discussed a technique to allow Git to handle them in lieu of his continued use of Unison.  He said to use Git's "--shared" option:  git clone --shared /mnt/fileserver/stuff.git stuff

Note that Mercurial extensions exist to handle binary files.  One of these is the BigFiles extension.  In essence, BigFiles and other similar approaches handle large binaries using versioned references to the actual binaries, which are stored elsewhere.

Update Oct 29, 2011: Looks like Mercurial 2.0 will have a built-in extension for handling binary files, LargeFiles extension.
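For illustration, the expected usage pattern with such an extension is to enable it in your hgrc (Mercurial.ini on Windows) and mark big binaries explicitly; the file name below is invented, and the exact details belong to the extension's own documentation:

[extensions]
largefiles =

hg add --large installer.iso
hg commit -m "track the installer as a largefile"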

Another issue is that binary files may not be diffable within the DVCS tool set.  In a DVCS one can set an external merge agent.   If one is not available, using the app that created the binary to diff and merge is cumbersome.    For example, a Word doc is, in effect, binary (even though internally it could be all XML).   Thus, a diff would not reveal a usable view.  One must 'check out' particular revisions and then use Word to do a diff, or just manually eyeball it.  Same thing with a zip, jar, image, etc.

Update 02-02-2012: Some tools allow direct use of external tools to diff “binary” files. I think TortoiseSVN does this, allowing Microsoft Word, for example, to diff.

2. Non-nested target folders.

A scenario may involve the manipulation of folders that are not nested. For example, a business system employs two servers, and changes must be made to both for a certain service to work; further, physically moving these folders or creating links is not possible or allowed. Mercurial, at this time, works on a single folder tree, and AFAIK there is no way to use symlinks or junctions to create a network folder graph, at least in my testing.  The ForestExtension or the experimental subrepositories feature in Mercurial 1.3 do not qualify, since they only enable handling a folder tree as multiple repositories.

Sure, each folder tree in the graph can be managed, but if a particular change affects files in each tree, there is no easy way to transactionally version them into one changeset, though there are ways to share history between repositories (as in the ShareExtension).

A possible solution is to allow the use of indirect folders.  In Mercurial, the work files and the actual repository, the .hg folder, are colocated.  Instead, the repository could point to the target folders (containing the work files) to be versioned.  In this way multiple non-nested folders can be managed.  Note that this is not a retreat to the centralized VCS, since the repository is still local and distributed using DVCS operations.   Below, the user has created a new Mercurial repository in folder "project".  This creates the actual repo subdirectory ".hg", and the indirect folders to be versioned are pointed to by a "repos" directive file or by actual symlinks.

project
  \.hg
  repos ----> src_folder1
        \---> src_folder2
        \---> src_folderN

Whether this is useful, possible, or already planned for is unknown.

I mentioned this “limitation” on the Mercurial mailing list and was told that this is not a use case for a DVCS. There are many good reasons why all (?) VCS are focused on the single folder tree.

Update, 2011-08-31 T 11:37
Just learned that Git does have an interesting capability:

It is also possible to have a working tree where .git is a plain ASCII file containing a line of the form "gitdir: <path>", i.e. the path to the real Git repository.

Though this doesn’t fulfill the non-nested project folders scenario, it does help Git be more applicable in an ad-hoc solution. For example, the repo could be located in a different storage location when the target folder is in a constrained unit.

3. Non-admin install

Updated 25 Aug 2010: In the requirements, non-admin install of the VCS was mentioned. This is where Mercurial fails, somewhat. The default install using the binary, at least on Windows, requires admin privileges. I got around this by first installing on another Windows system, then copying the install target folder to the PC I need to work on. This worked even when I installed on a Windows 7 Pro, and then copied to a Windows XP Pro. No problems yet. The Fossil DVCS does not have this problem.

4. Ignore Files

This is, perhaps, a minor issue. Mercurial, as most VCSs do, allows one to filter the files that are versioned in the repo.
In Mercurial one creates an .hgignore file, and within it one can use glob or regular expression syntax to specify the files to ignore. Well, this can be tricky; see the newsgroup discussion that was started by this post. IMHO, having another syntax declaration that allows explicit specification of directories and files is the way to go. How do other systems do this? Ant patternsets seem to be pretty usable.
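For reference, a small .hgignore mixing both syntaxes might look like this (the patterns are invented):

syntax: glob
*.log
*.tmp
build/*

syntax: regexp
^target/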

5. Semantic Gap

There is a semantic gap when working on a maintenance problem and switching to the versioning viewpoint.   When versioning systems are routinely used, as in software development, this is not an issue; it's just part of the Software Development Lifecycle or best practice (amazing that some shops don't use version control).   But when one uses version control only occasionally, as a last resort, it's another story.  QA, Support, and Project Managers may not be comfortable with repositories, branches, tags, labels, pull, push, and so forth.

When I first tried to use Mercurial for ad hoc servicing professionally, it quickly lost some of its advantages as the task (fixing a system) reached panic levels (usually the case with customer support and deployment schedules), and simply creating and looking at commit messages failed to keep up with the workflow.  Manually tracking which tag or branch related to which system-testing event was cumbersome.  Further use would have eventually revealed patterns of use that would have worked better, but that was a one-time experiment.

A partial solution, other than just getting more expert with the DVCS and better work patterns, is to implement a higher-level Domain Specific Language (DSL) that hides the low-level DVCS command line and repository-centric view.  This could even have a GUI counterpart.  This is not the same as a GUI interface to the DVCS, such as TortoiseHg or the Eclipse HG plugin.  What should that DSL be, and is it even possible or useful?

Workflow Updates

June 26, 2011: git-flow is an example of providing high-level operations to enable a specific workflow or model. Perhaps such an approach would be applicable to these AHVC requirements.

Sept 17, 2011: Mercurial Flow-Extension
implements the git-flow branching model.

Alternatives

Naive Approach

The usual approach is to just make copies of the affected folder or files that you will be changing.  You can use suffixes to distinguish them, such as gizmo.1.conf.   It's very common to see (even in production!) files or folders with people's initials, like gizmo.jb.19.conf.

This gets out of hand very quickly, especially if you are multitasking or working as part of a team and may forget after a good lunch what file "gizmo.24.conf" solved.   This problem is compounded when you need to change multiple files, so, for example, gizmo.jb.19.conf may depend on changes to "widget.22.conf".   This also gets very chaotic when the files to change and track are in different folder trees or even different storage systems.  Most importantly, this will not withstand the "throw clay at the wall and see what sticks" school of real-world maintenance.

One method I’ve seen and used myself is to just clone each folder tree to be modified.  This gives an easy way to back out any changes.  This, alas, is also error prone, resource intensive, and may not be possible on large file sets or very constrained systems.

Client-Server VCS

A traditional client-server VCS like Subversion can, of course, be used for ad hoc versioning.   With Subversion one can run svnserve, its lightweight server, in daemon mode.  Then create a task-based repository:

svnadmin create /var/svn/adHoc

And, import your tree of files:

svn import c:\Users\jbetancourt\Documents\projects\adhocVersioning file:///var/svn/adHoc/project

Plus, Subversion supports some offline use: the working copy keeps pristine base copies, so local diffs and reverts work without the server, though commits do not.  (I have not used Subversion in a while.)

Another effective Subversion feature is the use of local repositories via the "file://" protocol.
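For example, continuing with the repository created above, a working copy can be checked out directly from the local repository, no server needed (the working-copy folder name is invented):

svn checkout file:///var/svn/adHoc/project adHoc-work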

Management consoles

Many systems are managed by various forms of management consoles, i.e. graphical user interfaces.  These are client- or web-based and may be part of the system or a third-party solution.  This is a big advantage from a hands-on administrative point of view.  However, from an automation and scripting viewpoint this is not optimal.  Thus, there is hopefully an API or tool-based method of accessing the actual configuration files or databases of the system.  In that sense, these systems are within the scope of this discussion.

This is not always the case.  One application server comes to mind that was (is?) so complex that there was no way to script it.  Thus, no way to automate the build and release process and versioning of the system.  Consequently, there was also no way to automate the various QA tests that were always panic driven and manually applied.

Managed Approach

The correct or more tractable method is to use a managed approach.  This is a software configuration and distribution system that is usually under the control of the IT staff, for example Microsoft's Systems Management Server (SMS) or System Center Essentials (SCE) for SMBs.  Non-Microsoft solutions are of course available, such as those from IBM Tivoli's product lineup.

Why is this not always the best approach?  There may be situations where a subset of a managed resource must be modified.  For example, you are a field service engineer and must travel or remotely connect to a client’s system to handle a service call.   This process may also entail making changes to other hosted apps and system configurations, such as network configurations.  Trying to get the IT department to collaborate or change the configuration or schedule of the managed solution may not be possible or timely.  In fact, this would be discouraged (rightly so) since it can wreak havoc on a production system.  Thus, even changing some resource may entail admin of multiple systems, not just a push of a few files and run of some batch files.  It could require interactive set and test.  Picture the Mars Rovers and how the OS reset problem was finally solved.

Closely related to the managed approach is to use a centralized version control system (VCS) or backup support.  Fortunately many operating systems have versioning capabilities built in or readily available.  For example, in the Windows platform one can make System Restore points or use the supported backup subsystems (as found in Windows 7 Professional).  Many *nix’s also have built-in versioning support in the form of installable Version Control Systems or differential backup.  In high-end virtualized systems there are facilities for backup or making snapshots and even transport of live systems.

While these work, there is a certain amount of complexity involved. Also there are issues using the same approach on multiple operating systems.  Another important drawback is that one cannot always modify the target system and, for example, install a VCS, even temporarily.  The common factor in these approaches is that there is a central “server” and associated repository for revision storage.  This is fine when available but not very conducive to ad hoc use.

Versioning File System

A VFS could be of some use.  As far as I know there are no popular file systems that support versioning (as used here).  Digital Equipment's VAX/VMS had per-file versioning, and OpenVMS still does.  Microsoft's Windows was supposed to get this in the future WinFS, but that is no longer in the plans(?), though Windows 7 and current servers allow access to previous file versions as part of the System Protection feature.  Plan 9 has a snapshot feature. ZFS has advanced stuff too, and I would not be surprised if one can set a folder to be 'versioned'.

However, a VFS would not help with task-based versioning since, as discussed previously, there may be a need to change multiple subsystems and track these changes as "change sets".  Thus, a VFS is not a revision control system.

Of course, using a scripted solution (discussed next) in conjunction with a file change notification system (inotify), one could cobble together a folder based VCS.  However, this is outside of our lightweight requirements.

Scripted Solution

Of course, it should be possible, especially on *nix systems, to use the utilities available and construct a tool chain for lightweight versioning support.  The rich scripting and excellent tools like rsync make this doable.  Languages such as Perl or Python are ideal for gluing this together.

Yet this is not optimal, since these same tools will not work on all operating systems, or will require duplication.  For various reasons it is not always possible, for example, to install Cygwin on Windows and make use of the excellent *nix utilities and scripting approach.  Likewise, it is not possible to use the outstanding Windows PowerShell on Linux.  This is only a problem, of course, if we are referring to empowering staff to work seamlessly on different OSes or resources.  Having the same tools and workflow is valuable.

Another thing about this alternative is that a custom solution will eventually become, or take on the functions of, a version control system like Git, so why not just start with one?

Snapshot

One approach made possible by the aforementioned scripted solution is to create a snapshot system.  The DVCS gives us fine-grained control of file revisions.  But do we really need to diff and find out that a command in one batch file used the '-R' option, or do we just need to get back the file with the desired option?  With task-based snapshots we would know which file we want.  Before a task is begun, we initiate a snapshot.  This is analogous to OS restore points, except we do this for a specific target set.
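As a minimal sketch of such a snapshot step on *nix, using the rsync hard-link technique referenced under Further Reading (paths invented): rotate the previous snapshots, then copy the target set; unchanged files are hard-linked against the prior snapshot, so each snapshot stays cheap.

# before starting the task, rotate old snapshots and take a new one
rm -rf snapshots/snap.2
mv snapshots/snap.1 snapshots/snap.2  2>/dev/null
mv snapshots/snap.0 snapshots/snap.1  2>/dev/null
rsync -a --delete --link-dest=../snap.1 /etc/gizmo/ snapshots/snap.0/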

NoSQL Database

Finally, there have been alternatives to the Relational Database Management System (RDBMS) for many years.  Most recently, these are the NoSQL group of projects, such as CouchDB.    CouchDB claims that it is "Distributed, featuring robust, incremental replication with bi-directional conflict detection and management."   Those features sound like something an ad hoc version control system should have.  Yet CouchDB (all of them?) is document-centric.  Still, worth pondering.

Conclusion

Presented were a few thoughts on an approach to ad hoc versioning.  A DVCS was proposed as a lightweight solution and some issues were mentioned.  Alternatives were examined.  More research is required to evaluate the proposal and determine best practices for the discussed scenarios.

Updates

7/15/10:  Changed “maintain” to “accomplish” in Scenario as per feedback from K. Grover.

7/23/10:  Forgot that I visited Ben Tsai’s blog where he discusses using Mercurial within an existing VCS such as Subversion, which I’ve also done, but not really the topic I discussed.

Further Reading

“HgInit: Ground up Mercurial”, http://hginit.com/01.html

Setting up for a Team

“Easy Automated Snapshot-Style Backups with Linux and Rsync” http://www.mikerubel.org/computers/rsync_snapshots/

Using Mercurial as ad-hoc local version control

“Intro to Distributed Version Control (Illustrated)”
http://betterexplained.com/articles/intro-to-distributed-version-control-illustrated/

Version Control, infrastructures.org
http://www.infrastructures.org/bootstrap/version.shtml

“The Risks of Distributed Version Control”,
http://blog.red-bean.com/sussman/?p=20

Subversion Re-education

“Subverting your homedir, or keeping your life in svn”
http://kitenet.net/~joey/svnhome/ (He now uses Git)
http://microseeds.com/blog/?p=95

Home directory version control notification
http://kristian-domagala.blogspot.com/2008/10/home-direcotry-version-control.html

“Managing your web site with Mercurial”, Tim Post, http://echoreply.us/tuto/mercurial_site_management.html

SingleDeveloperMultipleComputers
http://mercurial.selenic.com/wiki/SingleDeveloperMultipleComputers

Mercurial by example
http://www.jemander.se/MercurialByExample.pdf

Mercurial (hg) with Dropbox
http://www.dzone.com/links/r/mercurial_hg_with_dropbox.html

Mercurial for Git users
http://mercurial.selenic.com/wiki/GitConcepts

Versioning File System
http://en.wikipedia.org/wiki/Versioning_file_system#Linux

Agile Operations in the Enterprise
Michael Nygard, http://www.infoq.com/articles/agile-operations

git-sync
http://code.google.com/p/git-sync/

git-flow
https://github.com/nvie/gitflow

Microsoft System Center Essentials
http://www.microsoft.com/systemcenter/essentials/en/us/default.aspx

A utility that keeps track of changes to the etc configuration folder:
http://kitenet.net/~joey/code/etckeeper/

Version Control for Multiple Agile Teams
http://www.infoq.com/articles/agile-version-control#q22

DVCS
http://en.wikipedia.org/wiki/Distributed_Version_Control_System

DVCS vs Subversion smackdown, round 3

“Using Mercurial as ad-hoc local version control”; Tsai, Ben;
http://bentsai.wordpress.com/2008/05/30/using-mercurial-as-ad-hoc-local-version-control/#comment-24

Tracking /etc etc

Subversion
http://subversion.apache.org/

For a more detailed exposition, see the Mercurial tutorial:
http://www.serpentine.com/mercurial/index.cgi?Tutorial

The Hg manpage is available at:  http://www.selenic.com/mercurial/hg.1.html

There’s also a very useful FAQ that explains the terminology:
http://www.selenic.com/mercurial/FAQ.html

There’s also a good README:  http://www.selenic.com/mercurial/README

HG behind the scenes:
http://hgbook.red-bean.com/read/behind-the-scenes.html

Mercurial
http://en.wikipedia.org/wiki/Mercurial_%28software%29

Mercurial Basic workflows
http://mercurial.selenic.com/guide/#basic_workflow

Mercurial BigFiles Extension
http://mercurial.selenic.com/wiki/BigfilesExtension

Mercurial LargeFiles Extension
LargeFiles

Mercurial Subrepos: A past example revisited with a new technique
http://playcontrol.net/ewing/jibberjabber/mercurial_subrepos_a_past_e.html

Mercurial(hg) Cheatsheet for Xen  http://xen.org/files/hg-cheatsheet.txt

A Guide to Branching in Mercurial
http://stevelosh.com/blog/2009/08/a-guide-to-branching-in-mercurial/

Subrepositories
http://mercurial.selenic.com/wiki/subrepos

Nested Repositories
http://mercurial.selenic.com/wiki/NestedRepositories

hgdeps
http://ratatanek.cz/hg/hgdeps/file/ab2935095cb9/deps.py

Tracking 3rd-party sources
http://www.selenic.com/pipermail/mercurial/2007-April/013002.html

TortoiseHg
http://tortoisehg.bitbucket.org/

Git
http://en.wikipedia.org/wiki/Git_%28software%29

Git as an alternative to unison
http://kitenet.net/~joey/blog/entry/gitless/

