<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Peineary Development]]></title><description><![CDATA[A Class Above Binary]]></description><link>https://peinearydevelopment.azurewebsites.net/</link><generator>Ghost 0.7</generator><lastBuildDate>Fri, 10 Apr 2026 20:47:57 GMT</lastBuildDate><atom:link href="https://peinearydevelopment.azurewebsites.net/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Debug Angular build with VSCode]]></title><description><![CDATA[<p>I don't know about you dear reader, but to me, almost all code that I haven't written is magical. What do I mean by that? A lot of times I can have thoughts about how a given result was achieved, so it's not like the classic 'magicians never share their</p>]]></description><link>https://peinearydevelopment.azurewebsites.net/debug-angular-build-with-vscode/</link><guid isPermaLink="false">8eb230b8-bbed-463e-98b7-fb29dfab846a</guid><category><![CDATA[Angular]]></category><category><![CDATA[VSCode]]></category><category><![CDATA[Debugging]]></category><dc:creator><![CDATA[PeinearyDevelopment]]></dc:creator><pubDate>Thu, 30 Aug 2018 14:16:11 GMT</pubDate><content:encoded><![CDATA[<p>I don't know about you dear reader, but to me, almost all code that I haven't written is magical. What do I mean by that? A lot of times I can have thoughts about how a given result was achieved, so it's not like the classic 'magicians never share their secrets' magic. Instead, I don't have to worry about the actual implementation; it "Just Works". That is, of course, until it doesn't.</p>

<p>This happened to me recently, trying to create a library for Angular.</p>

<hr>

<p>FWIW: The Backstory</p>

<p>I'm working on a shared Angular library for an organization. One of the components uses a CSS grid layout to achieve a given look. I created an application that I could use to test my components during development and, alongside it, the library with the component. <a href="https://stackoverflow.com/questions/51600295/how-can-autoprefixer-work-in-angular-with-single-file-components">I noticed</a> that while running the application, if it was a multi-file component, I would get the browser prefixes I would expect (in this case <code>grid { display: grid; }</code> would transform to <code>grid { display: -ms-grid; display: grid; }</code>), but if it was a single-file component, I would only get <code>grid { display: grid; }</code>. So as long as I created multi-file components, problem solved, right?</p>

<p>Wrong. :(</p>

<p>As it turns out, if I kept to multi-file components, everything worked with my test application. When I built and published the library, though, I experienced the same issue: my component wasn't laid out properly in IE because the additional <code>-ms-grid</code> property wasn't there. It turns out that the build process for a library takes multi-file components and creates single-file components out of them. <em>Please refer <a href="https://github.com/angular/angular-cli/issues/11480#issuecomment-403082982">here</a> if you want to know why this is the case.</em> This is fine and dandy, except it didn't seem to be applying the vendor prefixes when it did so.</p>

<hr>

<p>I attempted to ask a <a href="https://stackoverflow.com/questions/51968945/how-do-i-need-to-configure-ng-packagr-to-apply-autoprefixer">question</a> on Stack Overflow and made some comments on <a href="https://github.com/angular/angular-cli/issues/11480">an issue</a> in the Angular CLI GitHub repo, to no avail. I was getting so frustrated that I decided I would try to debug through the build process to see if I could determine the source of the error. I love VSCode and have been developing in it for a while, but had never really debugged my code through it. I've done that in Visual Studio for C# code and in the browser for front-end code (with a lot of help from sourcemaps ;)).</p>

<p>This was a new experience for me and was a lot more difficult than I was expecting. Things make a bit of sense now, but I thought I would share the learning process to remember the solution and hopefully aid others in not stumbling as much as I did when trying to achieve the same result.</p>

<p>Going to the debug icon, I was offered the option to create a new configuration, as I didn't have any in my project yet. The most logical choice for me was to debug an NPM task, as that is how I normally run my builds. Choosing this option produced JSON like this:</p>

<pre><code>{
  "type": "node",
  "request": "launch",
  "name": "Launch via NPM",
  "runtimeExecutable": "npm",
  "runtimeArgs": [
    "run-script",
    "debug"
  ],
  "port": 9229
}
</code></pre>

<p>This seemed like a nice start, but running it gave me a weird error. I spent a bunch of time flailing at this step, trying to add other (seemingly logical) properties to the config, all without change. Searching for 'vscode debug npm' led me to the <a href="https://code.visualstudio.com/docs/nodejs/nodejs-debugging#_launch-configuration-support-for-npm-and-other-tools">official documentation</a>, which didn't help much, mostly because I didn't read it closely enough (even when I did later, it still seemed to be missing a step). A bit further down I saw the auto-detect debug configuration feature, which seemed cool, but also didn't work for me. At that point, I came across this <a href="https://stackoverflow.com/questions/34835082/how-to-debug-using-npm-run-scripts-from-vscode#comment-69464579">SO question</a> (and more importantly, the comment).</p>

<p>I was finally getting different error messages when I tried to attach the debugger, which led me to believe I was getting closer. The script I was trying to run/debug originally looked like this: <code>ng build pd-ng</code>. After everything I had read and tried, I managed to get it to this: <code>node --nolazy --inspect-brk=9229 node_modules/.bin/ng build pd-ng</code>. I kept getting weird errors, and while the error changed if I used <code>ng</code> vs <code>ng.cmd</code>, it wasn't really making a difference. Throughout this process, some of the errors led me to modify the launch configuration JSON to include <code>outFiles</code> (and a few other properties I can't remember) because it seemingly couldn't find the js files.</p>

<p>At some point it dawned on me to look into the actual <code>ng</code>/<code>ng.cmd</code> files I was trying to launch, which looked as follows:</p>

<pre><code>@IF EXIST "%~dp0\node.exe" (
  "%~dp0\node.exe"  "%~dp0\..\@angular\cli\bin\ng" %*
) ELSE (
  @SETLOCAL
  @SET PATHEXT=%PATHEXT:;.JS;=;%
  node  "%~dp0\..\@angular\cli\bin\ng" %*
)
</code></pre>

<p>As can be seen, this is launching node, pointing it to execute one of Angular's scripts. Armed with that knowledge, I was able to get my debug script running. Here is the final result:</p>

<p><code>.vscode/launch.json</code></p>

<pre><code>{
    // Use IntelliSense to learn about possible attributes.
    // Hover to view descriptions of existing attributes.
    // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
    "version": "0.2.0",
    "configurations": [
      {
        "type": "node",
        "request": "launch",
        "name": "Launch via NPM",
        "cwd": "${workspaceFolder}",
        "runtimeExecutable": "npm",
        "windows": {
          "runtimeExecutable": "npm.cmd"
        },
        "runtimeArgs": [
          "run-script",
          "build:debug"
        ],
        "port": 9229
      }
    ]
}
</code></pre>

<p><code>package.json</code> scripts:</p>

<p><code>"build:debug": "node --nolazy --inspect-brk=9229 node_modules/@angular/cli/bin/ng build pd-ng"</code></p>

<p>After all of this work, I wish that I could tell you I found the source of the bug. Sadly, I haven't, but just being able to debug and learning a bit more about VSCode has been a great experience in its own right.</p>]]></content:encoded></item><item><title><![CDATA[Testing a View with Entity Framework Core and SQLite]]></title><description><![CDATA[<p>I encountered an interesting problem/solution the other day that I thought would be fun to write a short post about.</p>

<p>I'm working with an organization at the moment that utilizes Entity Framework Core as its ORM. This organization is just getting started with unit testing and decided to use</p>]]></description><link>https://peinearydevelopment.azurewebsites.net/testing-a-view-with-entity-framework-core-and-sqlite/</link><guid isPermaLink="false">93b026da-fe2a-46c8-b1d2-3eb9f450f938</guid><category><![CDATA[entity-framework]]></category><category><![CDATA[sqlite]]></category><category><![CDATA[nunit]]></category><category><![CDATA[unit-testing]]></category><category><![CDATA[automated-testing]]></category><dc:creator><![CDATA[PeinearyDevelopment]]></dc:creator><pubDate>Tue, 19 Jun 2018 16:37:44 GMT</pubDate><content:encoded><![CDATA[<p>I encountered an interesting problem/solution the other day that I thought would be fun to write a short post about.</p>

<p>I'm working with an organization at the moment that utilizes Entity Framework Core as its ORM. This organization is just getting started with unit testing and decided to use NUnit as their testing framework. The situation we ran into isn't NUnit specific, but I mention it as the code samples have NUnit specific attributes. The situation was as follows:</p>

<blockquote>
  <p>Based on information this organization has collected on its users, it auto-generates some aggregated information on a user's interests and populates a table with that information. This auto-generation process is owned by one team and exists in a database that isn't used solely by this team's application and therefore isn't accessed directly with Entity Framework.</p>
  
  <p>In an effort to enhance user experiences, they wanted to provide an interface for the users to view those auto-generated interests and remove them or add new ones. The approach taken was to utilize two tables and a view. One table (the one external to this project) contained the auto-generated interests. The other contained all of the users' interest change requests. The view then combined the two, using the users' table to add to/mask from the auto-generated table. This allowed the organization to preserve the additional data it had generated for those terms, in case the user wanted to add a term back, or to help enhance the auto-generation mechanism's accuracy in the future.</p>
</blockquote>

<p>For instance:</p>

<p><strong>Auto-generated table</strong></p>

<table>
  <thead>
    <tr><th>UserId</th><th>Interest</th><th>Score</th></tr>
  </thead>
  <tbody>
    <tr><td>1</td><td>restaurants</td><td>5</td></tr>
    <tr><td>1</td><td>games</td><td>15</td></tr>
  </tbody>
</table>

<p><strong>User interests table</strong></p>

<table>
  <thead>
    <tr><th>UserId</th><th>Interest</th><th>Action</th></tr>
  </thead>
  <tbody>
    <tr><td>1</td><td>restaurants</td><td>delete</td></tr>
    <tr><td>1</td><td>nature</td><td>add</td></tr>
  </tbody>
</table>

<p><strong>Combined interests view</strong></p>

<table>
  <thead>
    <tr><th>UserId</th><th>Interest</th></tr>
  </thead>
  <tbody>
    <tr><td>1</td><td>games</td></tr>
    <tr><td>1</td><td>nature</td></tr>
  </tbody>
</table>

<p>As this was an area where a bug was recently discovered, it seemed a ripe area to add unit tests. The official <a href="https://docs.microsoft.com/en-us/ef/core/providers/in-memory/">Entity Framework Core documentation</a> seems to slightly favor SQLite for testing, so the team decided to take that approach.</p>

<p>The <code>DbContext</code> for the types used looked as follows:</p>

<pre><code>namespace PeinearyDevelopment.BusinessComponents.DataAccess
{
    public class UserDbContext : DbContext
    {
        internal DbSet&lt;UserInterestEditDto&gt; UserInterestEdits { get; set; }

        public DbSet&lt;UserInterestDto&gt; UserInterests
        {
            get { return Set&lt;UserInterestDto&gt;(); }
        }

        protected override void OnModelCreating(ModelBuilder modelBuilder)
        {
            modelBuilder.Entity&lt;UserInterestEditDto&gt;(entity =&gt;
            {
                entity.ToTable("UserInterestEdits", "dbo");
            });

            modelBuilder.Entity&lt;UserInterestDto&gt;(entity =&gt;
            {
                entity.ToTable("V_UserInterests", "dto");
                entity.HasKey(interest =&gt; new { interest.UserId, interest.Interest });
            });
        }
    }
}
</code></pre>

<p>As can be seen, the <code>UserInterestEdits</code> aren't ever exposed directly, so that <code>DbSet</code> is made <code>internal</code>, and since the <code>UserInterests</code> view is read-only, only a <code>get</code> accessor is implemented for its <code>DbSet</code>. Also of note, while slightly confusing, the <code>DbSet</code> is linked to its view with the <code>ToTable</code> method, as a view is treated like a virtual table.</p>

<p>Normally it is considered best practice to write a failing test first and then update the code to make the test succeed. For brevity, and as the bug isn't really relevant to this article, I'm writing things a bit out of order. Once our bugfix was in place, the code under test looked as follows:</p>

<pre><code>namespace PeinearyDevelopment.BusinessComponents.DataAccess
{
    public enum Action
    {
        Add = 0,
        Delete = 1
    }

    public class UsersDal : IUserModifierDal
    {
        private UserDbContext DbContext { get; }

        public UsersDal(UserDbContext dbContext)
        {
            DbContext = dbContext;
        }

        public async Task MergeInterests(int userId, string[] interests, Action action)
        {
            var hasChanges = false;
            foreach (var interest in interests)
            {
                var dbInterest = DbContext.UserInterestEdits.FirstOrDefault(edit =&gt; edit.UserId == userId &amp;&amp; edit.Interest == interest);

                if (dbInterest != null &amp;&amp; (action == Action.Delete || (action == Action.Add &amp;&amp; dbInterest.Action == Action.Delete)))
                {
                    DbContext.UserInterestEdits.Remove(dbInterest);
                    hasChanges = true;
                }
                else if (dbInterest == null)
                {
                    DbContext.Set&lt;UserInterestEditDto&gt;()
                             .Add(new UserInterestEditDto
                                  {
                                     Action = action,
                                     UserId = userId,
                                     Interest = interest
                                  });
                    hasChanges = true;
                }
            }

            if (hasChanges)
            {
                await DbContext.SaveChangesAsync().ConfigureAwait(false);
            }
        }

        public Task&lt;UserInterestDto[]&gt; GetUserInterests(int userId)
        {
            return DbContext.UserInterests
                            .AsNoTracking()
                            .Where(e =&gt; e.UserId == userId)
                            .ToArrayAsync();
        }
    }
}
</code></pre>

<p>Since we are using dependency injection to provide instances of interface implementations to our classes through their constructors, we need to setup the DI container for our tests as well. That looks as follows:</p>

<pre><code>namespace PeinearyDevelopment.BusinessComponents.DataAccess.UnitTests
{
    [SetUpFixture]
    public class Setup
    {
        private static ServiceProvider _serviceProvider;

        public static ServiceProvider ServiceProvider
        {
            get
            {
                if (_serviceProvider == null)
                {
                    var configurationContainer = new ServiceCollection();

                    configurationContainer.AddDbContext&lt;UserDbContext&gt;(options =&gt; options.UseSqlite("DataSource=:memory:"));
                    configurationContainer.AddScoped&lt;IUserModifierDal, DataAccess.UsersDal&gt;();

                    _serviceProvider = configurationContainer.BuildServiceProvider();
                }

                return _serviceProvider;
            }
        }
    }
}
</code></pre>

<hr>

<blockquote>
  <p><strong>NOTE</strong>: <a href="https://docs.microsoft.com/en-us/ef/core/providers/sqlite/limitations">Entity Framework's documentation</a> still seems to state that schema support is a limitation of the SQLite provider. While I have run into this in the past and had to use a workaround, the SQLite provider currently seems to handle schemas gracefully, and I didn't find the need to work around this limitation.</p>
  
  <p><strong>WORKAROUND</strong>:</p>

<pre><code>public UserDbContext(DbContextOptions&lt;UserDbContext&gt; options) : base(options)
{
    IsTestRun = options.FindExtension&lt;SqliteOptionsExtension&gt;() != null;
}
</code></pre>
</blockquote>

<pre><code>protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.Entity&lt;UserInterestDto&gt;(entity =&gt;
    {
        if (IsTestRun)
        {
            entity.ToTable("dto_V_UserInterests");
        }
        else
        {
            entity.ToTable("V_UserInterests", "dto");
        }
    });
}
</code></pre>

<hr>

<p>Now for the tests. As the first few were going to be reading/writing to the actual table on our context, we anticipated they would be easy.</p>

<pre><code>namespace PeinearyDevelopment.BusinessComponents.DataAccess.UnitTests.UsersDal
{
    [TestFixture]
    public class MergeInterests
    {
        private IUserModifierDal UsersDal { get; }

        public MergeInterests()
        {
            var serviceProvider = Setup.ServiceProvider;
            UsersDal = serviceProvider.GetService&lt;IUserModifierDal&gt;();
        }

        [Test]
        [Description(@"Given:
                        A user
                       When:
                        A new user provided interest is added
                       Then:
                        The interest should be returned for that user")]
        public async Task AddUserProvidedInterestToUser()
        {
            // arrange
            const int userId = 1;
            const string userProvidedInterest = "nature";

            // act
            await UsersDal.MergeInterests(userId, new[] { userProvidedInterest }, Action.Add).ConfigureAwait(false);

            // assert
            var interests = await UsersDal.GetUserInterests(userId).ConfigureAwait(false);

            Assert.IsTrue(interests.Any(interest =&gt; string.Equals(interest.Interest, userProvidedInterest, System.StringComparison.Ordinal)));
        }

        [Test]
        [Description(@"Given:
                        A user with a user provided interest
                       When:
                        The user provided interest is deleted
                       Then:
                        The interest should not be returned for that user")]
        public async Task RemoveUserProvidedInterestFromUser()
        {
            // arrange
            const int userId = 1;
            const string userProvidedInterest = "nature";
            await UsersDal.MergeInterests(userId, new[] { userProvidedInterest }, Action.Add).ConfigureAwait(false);

            // act
            await UsersDal.MergeInterests(userId, new[] { userProvidedInterest }, Action.Delete).ConfigureAwait(false);

            // assert
            var interests = await UsersDal.GetUserInterests(userId).ConfigureAwait(false);

            Assert.IsFalse(interests.Any(interest =&gt; string.Equals(interest.Interest, userProvidedInterest, System.StringComparison.Ordinal)));
        }
    }
}
</code></pre>

<p>Those seemed reasonable, so we decided to run the tests...and they both failed. We were scratching our heads for a few moments as to why because we stepped through the code and saw that the <code>UserInterestEdits</code> table was getting updated as expected, but our assertions were still failing.</p>

<p>The reason was quite simple actually, but it took us a minute to figure out what was going on. Entity Framework was creating tables for both DbSets for us, so in stepping through the code all seemed well. The obvious problem, though, is that while in SQL land our <code>UserInterests</code> come from a view, in SQLite land there was no view. Entity Framework created a table for <code>UserInterests</code> to execute queries against, but it wasn't reflecting the additions to the <code>UserInterestEdits</code> table. Once we realized that, it took a couple of iterations, but we came up with the following addition to our <code>Setup</code> class:</p>

<pre><code>[OneTimeSetUp]
public async Task InitializeDatabase()
{
    var context = ServiceProvider.GetService&lt;UserDbContext&gt;();
    await context.Database.OpenConnectionAsync().ConfigureAwait(false);
    context.Database.ExecuteSqlCommand(createView);
    context.SaveChanges();
}

/*
 * Entity Framework sets up tables for all of its entities in SQLite.
 * This entity set is pulling from a view.
 * In order to emulate that, we need to DROP the auto-generated table and CREATE the VIEW instead.
 */
private readonly string createView = $@"
DROP TABLE V_UserInterests;

CREATE VIEW V_UserInterests AS
SELECT
    UserId,
    Interest
FROM UserInterestEdits
WHERE ActionType = {(short)Action.Add};";
</code></pre>

<p>We now ran the tests again and <em>SUCCESS</em>, they passed!</p>

<p>The only problem we had now was that the bugfix was actually in the area where the user was manipulating the auto-generated interests. We set about writing the tests for those and quickly realized an issue. The other table that is used to create the view was in another database and wasn't actually a part of our Entity Framework DbContext. Given the previous work we had done though, a path forward became clear pretty quickly.</p>

<p>First we modified the <code>createView</code> string in our <code>Setup</code> class to:</p>

<pre><code>private readonly string createView = $@"
DROP TABLE V_UserInterests;

CREATE TABLE AutoGeneratedUserInterests (
    UserId INT NOT NULL,
    Interest NVARCHAR NOT NULL,
    Score NUMERIC(7, 4) NOT NULL DEFAULT 0,
    PRIMARY KEY (UserId, Interest)
);

CREATE VIEW V_UserInterests AS
SELECT
    UserId,
    Interest
FROM UserInterestEdits
WHERE ActionType = {(short)Action.Add}

UNION

SELECT
    auto.UserId,
    auto.Interest
FROM AutoGeneratedUserInterests auto
LEFT OUTER JOIN UserInterestEdits edits ON auto.UserId = edits.UserId AND auto.Interest = edits.Interest
WHERE edits.UserId IS NULL
      OR edits.ActionType &lt;&gt; {(short)Action.Delete};";
</code></pre>

<p>This creates a 'mocked' auto-generated interests table and creates the view as a union of the two tables, masking any auto-generated interests the user wanted to remove. We were then able to create our remaining tests.</p>

<pre><code>[Test]
[Description(@"Given:
                A user with a system generated interest
               When:
                The system generated interest is deleted
               Then:
                The interest should not be returned for that user")]
public async Task RemoveSystemGeneratedInterestFromUser()
{
    // arrange
    const int userId = 1;
    const string systemGeneratedInterest = "nature";

    await CreateSystemGeneratedInterest(userId, systemGeneratedInterest).ConfigureAwait(false);

    // act
    await UsersDal.MergeInterests(userId, new[] { systemGeneratedInterest }, Action.Delete).ConfigureAwait(false);

    // assert
    var interests = await UsersDal.GetUserInterests(userId).ConfigureAwait(false);

    Assert.IsFalse(interests.Any(interest =&gt; string.Equals(interest.Interest, systemGeneratedInterest, System.StringComparison.Ordinal)));
}

[Test]
[Description(@"Given:
                A user with a system generated interest that has been deleted
               When:
                The system generated interest is added
               Then:
                The interest should be returned for that user")]
public async Task AddRemovedSystemGeneratedInterestFromUser()
{
    // arrange
    var userId = 1;
    const string systemGeneratedInterest = "interest";

    await CreateSystemGeneratedInterest(userId, systemGeneratedInterest).ConfigureAwait(false);

    await UsersDal.MergeInterests(userId, new[] { systemGeneratedInterest }, Action.Delete).ConfigureAwait(false);

    // act
    await UsersDal.MergeInterests(userId, new[] { systemGeneratedInterest }, Action.Add).ConfigureAwait(false);

    // assert
    var interests = await UsersDal.GetUserInterests(userId).ConfigureAwait(false);

    Assert.IsTrue(interests.Any(interest =&gt; string.Equals(interest.Interest, systemGeneratedInterest, System.StringComparison.Ordinal)));
}

private async Task CreateSystemGeneratedInterest(int userId, string systemGeneratedInterest)
{
    var sql = $@"INSERT INTO AutoGeneratedUserInterests
                    (UserId, Interest)
                 VALUES
                    ({userId}, '{systemGeneratedInterest}')";

    var dbContext = Setup.ServiceProvider.GetService&lt;UserDbContext&gt;();
    await dbContext.Database.ExecuteSqlCommandAsync(sql).ConfigureAwait(false);

    var systemGeneratedInterests = await UsersDal.GetUserInterests(userId).ConfigureAwait(false);
    Assert.IsTrue(systemGeneratedInterests.Any(interest =&gt; string.Equals(interest.Interest, systemGeneratedInterest, System.StringComparison.Ordinal)));
}
</code></pre>

<p>There is definitely a less than ideal aspect to this approach. The view and external table need to be replicated here and if either of them change, the updates need to be made in this project as well to continue to have the tests be valid. On the other hand, especially as these are unit tests, this approach allows us to mock out the external dependencies and have our tests focus on asserting the validity of their internal logic.</p>]]></content:encoded></item><item><title><![CDATA[npm scripts: Getting Started]]></title><description><![CDATA[<p>Around the time I started focusing my career more on web development, there were a few build engines emerging to help with the build tasks relating to front-end development files. They were <a href="https://gruntjs.com/">grunt</a> and <a href="https://gulpjs.com/">gulp</a>. Most of the discussion at the time was around code vs. configuration and gulp's usage</p>]]></description><link>https://peinearydevelopment.azurewebsites.net/npm-scripts-getting-started/</link><guid isPermaLink="false">c49429fd-4c83-4dae-90b3-b2dd6d963005</guid><category><![CDATA[npm]]></category><category><![CDATA[npm-scripts]]></category><category><![CDATA[javascript]]></category><category><![CDATA[front-end-development]]></category><dc:creator><![CDATA[PeinearyDevelopment]]></dc:creator><pubDate>Fri, 18 May 2018 15:02:11 GMT</pubDate><content:encoded><![CDATA[<p>Around the time I started focusing my career more on web development, there were a few build engines emerging to help with the build tasks relating to front-end development files. They were <a href="https://gruntjs.com/">grunt</a> and <a href="https://gulpjs.com/">gulp</a>. Most of the discussion at the time was around code vs. configuration and gulp's usage of streams, as opposed to writing to disk for each operation, to speed up their builds. 
I don't remember anyone mentioning <a href="https://docs.npmjs.com/misc/scripts">npm-scripts</a> at the time, nor, quite frankly, much since. Most front-end starter kits include some, and in learning a little more about them, I've found them quite useful for many tasks. React, Angular and Aurelia's starter projects all include at least one npm script. Angular seems to use gulp in its build scripts, whereas React seems to utilize npm scripts alone and Aurelia appears to use a combination of the two.</p>

<p>At some point, I realized that I could create my own npm-scripts and began to play around with them to start to define where I could gain the most value from them in my day to day development. I found out a number of very useful things along the way. While I've no doubt just started to scratch the surface of this utility, I thought I would share some of my initial discoveries.</p>

<p>By way of introduction, any application that utilizes npm to manage its packages has a <code>package.json</code> file that contains many properties. Some include the project's name, (optionally) authors, license and dependencies. One of these properties (and the focus of this post) is the <code>scripts</code> property. This is where you can declare scripts for npm to execute for the given project. You give each script a name and then tell it what commands to execute.</p>

<p>The easiest way to see this is through the following steps: <br>
- <code>mkdir npm-scripts-demo</code>
- <code>cd npm-scripts-demo</code>
- <code>npm init</code> <em>(then accept all of the defaults)</em></p>

<p>Once complete, the <code>package.json</code> that gets created has the following script: <code>"test": "echo \"Error: no test specified\" &amp;&amp; exit 1"</code>. From the command line, you can run <code>npm run test</code> and you should see the script get executed. It will first echo out <code>"Error: no test specified"</code> and, since the script then exits with a value of 1, npm will also report an error.</p>

<p>As a bit of an aside, but shown in the <code>test</code> script above: with npm scripts, one can run multiple commands in a single script. If the commands are separated by <code>&amp;&amp;</code>, they run sequentially, with each command running only if the previous one succeeded; if they are separated by <code>|</code> or <code>&amp;</code>, depending on the execution environment, they run concurrently. <em>(NOTE: see <a href="https://stackoverflow.com/questions/30950032/how-can-i-run-multiple-npm-scripts-in-parallel#answer-30950298">this StackOverflow Q&amp;A</a> for a more cross-platform way of doing this)</em> This can be tested by updating the above script to <code>"test": "echo \"Error: no test specified\" &gt; test.txt | exit 1"</code>. As can be seen when running the script, the error message shows in the console, but the file gets created with the expected text as well.</p>

<p>Now the above example is pretty trite, but provides a good starting point. <code>echo</code> is a global command exposed in most operating systems through their command line. Executing npm scripts will mostly be used to execute commands from installed npm packages. As an example, a common operation to be performed during local development will be to delete a directory containing compiled artifacts before compiling them anew following code changes. This might be <code>.css</code> files that were composed from <code>.scss</code> files or <code>.js</code> files transpiled from <code>.ts</code> files. A common library used to perform this task is <code>rimraf</code>.</p>

<p>To demonstrate: install <code>rimraf</code> with the command <code>npm install rimraf</code> (depending on your npm version, this may or may not be saved to your <code>package.json</code> file; npm 5+ saves installed packages to <code>dependencies</code> by default. Either way, the package is installed in the <code>node_modules</code> directory relative to where the install command was executed). To utilize <code>rimraf</code> to delete the test file created above, run the command <code>node_modules/.bin/rimraf test.txt</code> in the command line. As can be seen, when npm installs an executable script, it places it in the <code>node_modules/.bin</code> directory. If we wanted to execute that command as an npm script called <code>clean</code>, we would update the scripts property to include <code>"clean": "rimraf test.txt"</code> and then execute it from the command line with <code>npm run clean</code>. Now I'm sure you're wondering: why isn't the task declaration exactly like the command line statement we executed above? In truth, we could have written the script as <code>node_modules/.bin/rimraf test.txt</code>, but npm puts the <code>node_modules/.bin</code> directory on the PATH when running scripts, obviating the need for the more verbose command. I prefer the shorter command as it keeps the scripts section a bit more readable.</p>

<p>There are multiple ways to define the order in which scripts run. One method, using the <code>&amp;&amp;</code> operator, was described above. There is another, relying on an npm convention, that I've found useful as well. By default, when npm is told to run a script (e.g. <code>npm run test</code>), it first looks for a script called <code>pretest</code> and, if it finds one, runs it before the <code>test</code> script. Upon completion of the <code>test</code> script, it looks for a script named <code>posttest</code> and, if it finds one, runs that last. This can be demonstrated by adding <code>"pretest": "echo \"Hello World!\""</code> to the list of scripts and then executing <code>npm run test</code>. This works for any script: defining a script with the same name prefaced by <code>pre</code> or <code>post</code> has the same execution effect. I have even used this functionality to inject some file changes into a <code>node_modules</code> package after it was installed by declaring a <code>postinstall</code> script.</p>
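<p>To illustrate, here is a hypothetical scripts section using this convention around the <code>clean</code> script (the echoed messages are invented for this example):</p>

```json
"scripts": {
  "preclean": "echo \"About to clean\"",
  "clean": "rimraf test.txt",
  "postclean": "echo \"Done cleaning\""
}
```

<p>Running <code>npm run clean</code> would execute <code>preclean</code>, then <code>clean</code>, then <code>postclean</code>, in that order.</p>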

<p>All of the above has helped me a lot. To take it one step further: there are often times when running a CLI command isn't enough for the task at hand. What happens when we need more complex scripts and still want to execute them through npm scripts? We can put all of the code that needs to run in a file and execute that file from an npm script. We can rewrite our <code>clean</code> script to demonstrate.</p>

<p>Create a <code>clean.js</code> file with the following contents:</p>

<pre><code>var rimraf = require('rimraf');

// only report when rimraf actually passes an error
function onError(err) {
    if (err) {
        console.error(err);
    }
}

rimraf('test.txt', onError);
</code></pre>

<p>Update the <code>clean</code> script to: <code>"clean": "node clean"</code></p>

<p>Now when you execute <code>npm run clean</code>, npm instructs node to run the <code>clean.js</code> file as a script, which loads <code>rimraf</code> and runs it with the arguments provided.</p>

<p>While this is a very simple and not particularly useful example, it demonstrates the capability, and it can obviously be built upon to perform much more complicated actions. I've used this approach before to compile <code>.scss</code> files with sourcemaps, add a hash to each filename and copy the results to the website's root directory.</p>

<p>Hope this helps getting started with npm and being more productive with it from day 1!</p>]]></content:encoded></item><item><title><![CDATA[Multi-themed Web Project: Build scripts]]></title><description><![CDATA[<p>As can be seen from the previous posts, currently, there are a number of files required to get a build working for a new licensed organization. They each have conventions that need to be followed in terms of naming, placement in the project or contents. This can become a bit</p>]]></description><link>https://peinearydevelopment.azurewebsites.net/multi-themed-web-project-build-scripts/</link><guid isPermaLink="false">0404c303-f0da-42db-b2c3-4cd3f9a0a597</guid><category><![CDATA[Semantic UI]]></category><category><![CDATA[multi-themed application]]></category><category><![CDATA[node.js]]></category><dc:creator><![CDATA[PeinearyDevelopment]]></dc:creator><pubDate>Mon, 30 Apr 2018 13:33:36 GMT</pubDate><content:encoded><![CDATA[<p>As can be seen from the previous posts, currently, there are a number of files required to get a build working for a new licensed organization. They each have conventions that need to be followed in terms of naming, placement in the project or contents. This can become a bit of a nightmare when bringing a new organization on board. As a developer, I feel as though it is important to utilize our talents not just for our external customers, but to remember the internal customers as well(and yes, that does even mean we the developers). The next piece I wanted to touch on then, was the creation of some 'tooling' or other scripts that could be used to automate most of the setup. Outside of easing the setup process, it also creates a documented(even if only in code) and reliably repeatable process with which to setup a new organization, saving a lot of headaches down the line.</p>

<p>With the current design and setup, the bare minimum needed to set up a new organization is the organization's name, which is used at runtime to determine the directory to load the css files from, and two colors that represent the brand's primary and secondary colors. The <code>build/setupTheme.ts</code> script was created just for that. It can be run with the following command: <code>node_modules/.bin/ts-node build/setupTheme &lt;THEME_NAME&gt; &lt;PRIMARY_COLOR_NAME&gt;:&lt;PRIMARY_COLOR_VALUE&gt; &lt;SECONDARY_COLOR_NAME&gt;:&lt;SECONDARY_COLOR_VALUE&gt;</code>. The script grabs the arguments passed, does some minor validation on them, performs some string replacement on the files in the <code>/build/templates</code> directory and then places the results in the correct locations in the project. Once those files are in place, the next ui build will generate the appropriate css files and place them, along with the rest of the generated files, in the organization's <code>dist</code> directory. The code can be seen <a href="https://github.com/PdUi/semantic-ui-build-multiple-themes/blob/master/build/setupTheme.ts">here</a>. With this in place, setting up a new organization on the platform can be done in a matter of moments.</p>
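<p>As a rough sketch of the string-replacement step (the <code>{{...}}</code> placeholder token names here are invented for illustration; the real templates live in <code>build/templates</code>), the idea is simply:</p>

```javascript
// Hypothetical sketch of setupTheme's substitution step: fill invented
// {{...}} placeholders in a template with values passed on the command line.
function applyTheme(template, primaryColor, secondaryColor) {
  return template
    .replace(/\{\{PRIMARY_COLOR_VALUE\}\}/g, primaryColor)
    .replace(/\{\{SECONDARY_COLOR_VALUE\}\}/g, secondaryColor);
}

const template = '$primary-color: {{PRIMARY_COLOR_VALUE}};\n$secondary-color: {{SECONDARY_COLOR_VALUE}};';
console.log(applyTheme(template, 'blue', 'red'));
```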

<p>Another common task that should be addressed during any automated build of a front-end project is 'cache-busting'. Usually this is accomplished by adding a hash (or part of a hash) of the file's contents to the file name. As browsers have developed, they do certain things to help us and our websites be faster and more efficient. One method they employ is caching of 'static' files; file types like css, js and images usually fall into this category. That way, when the browser loads the html, if it sees a file it has loaded in the past, it can just pull it out of cache instead of reloading it from the server. This is GREAT for us and our sites...if the content in those files hasn't changed. What if we have added new functionality, or fixed a bug? If the file name remains the same as it was before that update, odds are the browser will serve up the version of the file it has cached locally instead of getting the updated version from the server. If we add a hash value calculated from the contents of the file itself to the filename, then the browser can still serve up the cached file when the contents haven't changed, because the file will have the same hash value. If the contents have changed, though, the updated html page will link to the new filename and the browser, not finding that in cache, will request it from the server, and the user gets the best of both worlds!</p>

<p>As described in a previous post, the project uses Semantic UI's styles framework for the majority of the applied styles, augmented by a few project-specific <code>.scss</code> files. Since we had to modify the semantic build to produce multiple themed builds, that process had to include adding a hash value to the file names. The custom <code>.scss</code> styles included for the project need this as part of their build as well. As such, both processes (<code>build/buildScss.ts</code> and <code>build/cacheBustAndCopySemanticFiles.ts</code>) were updated to include the <code>cacheBuster.ts</code> function. Its contents are as follows:</p>

<pre><code>import { rename } from 'fs';
import { fromFile, HashaOptions } from 'hasha';
import { basename, dirname, join } from 'path';
import { handleError } from './handleError';

export const addHash: (filePath: string) =&gt; void = (filePath: string): void =&gt; {
    const minCssExtension: string = '.min.css';
    const fileName: string = basename(filePath, minCssExtension);
    const hashaOptions: HashaOptions&lt;'base64'&gt; = { algorithm: 'md5' };

    /* tslint:disable:no-floating-promises */
    fromFile(filePath, hashaOptions)
        .then((hash: string | null) =&gt; {
            const hashFileNameLength: number = 20;
            const hashedFileName: string = `${fileName}.${hash.substr(0, hashFileNameLength)}${minCssExtension}`;
            const directoryPath: string = dirname(filePath);

            rename(
                filePath,
                join(directoryPath, hashedFileName),
                handleError
            );
        });
};
</code></pre>

<p>It uses some standard Node.js modules to perform common path and file operations and then leverages the <code>hasha</code> library to calculate the hash of the file contents.</p>

<p>After these are run, the <code>copyFiles.ts</code> function gets executed, which takes the front-end files that have been created and places them in the directory ASP.NET Core serves static files from by default (<code>wwwroot</code>).</p>

<p>With all of this in place, a <code>clean</code> function was necessary as well. While it is always good practice to clean compiled files out of the output directory before a new build, to ensure only the most recently built files are present, in this instance it was strictly necessary. As discussed in a previous post, the website was designed to look for and load the files in a directory specific to each organization. Since these files are hashed, it uses pattern matching to load them instead of being programmed with exact file names. This works well, but if artifacts from multiple builds are present in the same directory, the results loaded into the browser become unpredictable, as there is no assurance that the latest version of a file is the one being loaded.</p>

<p>This ends the series focused on how to have a Semantic UI powered front-end with multiple builds in one supporting a set of licensed organizations.</p>]]></content:encoded></item><item><title><![CDATA[Multi-themed Web Project: Front-end]]></title><description><![CDATA[<p>Following the previous posts that addressed some of the issues faced trying to create a multi-themed project in an Angular and ASP.NET Core project, I would now like to start discussing some of the decisions and their implementations that I made regarding the front-end portion of this application.</p>

<p>The</p>]]></description><link>https://peinearydevelopment.azurewebsites.net/multi-themed-web-project-front-end/</link><guid isPermaLink="false">7d1efedc-cc2a-44f1-a227-41803d7c6cb5</guid><dc:creator><![CDATA[PeinearyDevelopment]]></dc:creator><pubDate>Thu, 07 Dec 2017 21:21:21 GMT</pubDate><content:encoded><![CDATA[<p>Following the previous posts that addressed some of the issues faced trying to create a multi-themed project in an Angular and ASP.NET Core project, I would now like to start discussing some of the decisions and their implementations that I made regarding the front-end portion of this application.</p>

<p>The main obstacle faced to date was the need to be able to create the multiple themes through one build and then reference those built files correctly per organization. Referencing those files correctly per organization has already been discussed in the post addressing the back end concerns. It is being handled through an MVC Controller/View combination which is dynamically routing the user to the appropriate 'index' view based on the URL they are visiting the site through.</p>

<p>This article will focus on how we generate/maintain the theme files that the MVC application is then able to reference. In order to create the per organization theme files, we took a two-pronged approach. For the bulk of the styles, we punted and chose a very popular, flexible, front-end framework called <a href="https://semantic-ui.com/">Semantic UI</a>. For various tweaks or items we felt were 'missing' from the framework, we created a few scss files and a build process around them.</p>

<p>While researching our options in this area, we investigated: <br>
1. Creating all of our own styles from scratch <br>
    - This option seemed a bit too daunting as the team allocated for this project was only two developers, neither of which had previously focused on or specialized in CSS
2. Utilizing Bootstrap <br>
    - This option seemed very attractive as it seems to be a highly utilized/battle-tested framework. BUT, we avoided this for two reasons...
        1. Bootstrap v3 is the stable platform to date, but looking into creating different themes within their framework seemed difficult at best.
        2. Bootstrap v4 seems to be much more malleable in this regard as it is built on SCSS, but it is in alpha and has been for a long while.
3. Utilizing Semantic UI <br>
    - This framework provides a nice way to create themes and a number of hooks to enable further customization, as needed.
    - It is written with LESS and built with gulp which provided us a couple of hooks to enable the types of customizations we were aiming for with this project.</p>

<p>I opted for a mix of options 1 &amp; 3. The last post discussed how I managed to wrangle Semantic UI's build process into producing the css output for multiple themes with one build. As mentioned also in previous posts, while Semantic UI utilizes gulp for its build process, the custom build tasks I wanted to create were relatively small and focused so I decided to rely on executing my custom build tasks through npm scripts.</p>

<p>npm scripts can execute node commands. There is a nice npm package called <code>ts-node</code> that allows the developer to write node scripts in TypeScript and execute them directly: it compiles the TypeScript files and then runs the resulting JavaScript. I enjoy using TypeScript in place of JavaScript whenever I can, both for the type safety it provides and for the discoverability gains its type system offers.</p>

<p><strong>The code I reference for this post can be found on <a href="https://github.com/PdUi/semantic-ui-build-multiple-themes">GitHub</a>.</strong></p>

<p>As a bare minimum for what I was hoping to achieve, I defined these npm scripts:</p>

<pre><code>"postinstall": "ncp build/override-semantic-ui-build.js src/styles/semantic/tasks/build.js",
"prebuild": "npm run lint",
"build": "ts-node build/buildScss &amp; gulp --gulpfile ./styles/semantic/gulpfile.js build",
"lint": "npm run lint:ts &amp; npm run lint:sass",
"lint:ts": "tslint --type-check --project tsconfig.json --config config/tslint.json build/**/*.ts",
"lint:sass": "sass-lint -c config/sasslint.yml -v"
</code></pre>

<p>As mentioned in a previous post on setting up a Semantic UI build that creates artifacts for multiple themes, the <code>postinstall</code> script overwrites the single build file that needs to be updated to support that functionality. The <code>prebuild</code> script runs all of the linting tasks and is run automatically, by convention, before the build script executes. These rely on standard, well-documented linting tools: <a href="https://github.com/sasstools/sass-lint">sass-lint</a> and <a href="https://palantir.github.io/tslint/">tslint</a>.</p>

<p>The only other task in that list is the <code>build</code> task. The second command in the build script is the one that executes the Semantic UI gulp build task. At a high-level, the basic directory structure for the front end of the application relating specifically to the multi-themed styling is as follows:</p>

<pre><code>/build
/src
  /styles
    /semantic
      /src
        /themes
          /{registered_organization_abbreviation}
            /globals
              site.variables
            theme.config
    /site
      /shared
        /framework
          _colors.scss
        _site.scss
      {registered_organization_abbreviation}.scss
package.json
</code></pre>

<p>The <code>build</code> directory is where I chose to place all of the scripts I've created for the build process. The <code>styles</code> directory is broken into two main sections. The <code>semantic</code> directory contains the semantic-specific styles and the theme-specific files, which follow semantic's naming and placement conventions. The <code>site</code> directory contains the additional, site-specific styles that semantic doesn't provide.</p>

<p>Returning to the build task at hand, the first command that the <code>build</code> script executes is contained in the <a href="https://github.com/PdUi/semantic-ui-build-multiple-themes/blob/master/build/buildScss.ts"><code>build/buildScss.ts</code></a> file.</p>

<pre><code>import { writeFile } from 'fs';
import * as mkdirp from 'mkdirp';
import { render, Result, SassError } from 'node-sass';
import { argv } from 'process';
import { handleError } from './handleError';
/* tslint:disable-next-line */
const value: IThemeJson = require('./themes.json');
const isProductionBuild: boolean = !!argv.find((arg: string) =&gt; arg === 'production' || arg === 'prod' || arg === '-P' || arg === '--prod');

value.themes.forEach((theme: string) =&gt; {
    const outputDir: string = `./styles/dist/${theme}`;

// TODO: Add File Hashing
    mkdirp(outputDir, (error: Error): void =&gt; {
        handleError(error);

        const inputDirRoot: string = `./styles`;
        const inputFilePath: string = `${inputDirRoot}/${theme}.scss`;
        const cssOutputFilePath: string = `${outputDir}/${theme}.min.css`;
        const cssMapOutputFilePath: string = `${outputDir}/${theme}.css.map`;

        render({
                file: inputFilePath,
                outFile: cssOutputFilePath,
                outputStyle: isProductionBuild ? 'compressed' : 'expanded',
                sourceMap: !isProductionBuild
               },
               (sassError: SassError, result: Result): void =&gt; {
                   handleError(sassError);

                   writeFile(cssOutputFilePath, result.css, handleError);
                   if (!isProductionBuild) {
                       writeFile(cssMapOutputFilePath, result.map, handleError);
                   }
               });
    });
});
</code></pre>

<p>As can be seen, it utilizes an npm package called <code>node-sass</code> to process all of the scss files and create the transpiled <code>.css</code> files. Like many of the other scripts, it first retrieves the list of licensed organizations from the relevant <code>.json</code> file, loops through each theme, creates an output directory for each organization and then places the transpiled <code>.css</code> files in that directory. The goal is to pull as much of the site's styling as possible from semantic's styles, and to put as much of the rest as possible in the <code>framework</code> directory shown above. Each licensed organization's specific <code>.scss</code> file will then look something like this:</p>

<pre><code>@import './shared/framework/colors';

$primary-color: blue;
$secondary-color: red;

@import './shared/site';
</code></pre>

<p>What this means is that the transpilation done by <code>node-sass</code> will utilize the framework files as the defaults (currently just organization-specific colors, but this could be extended to include fonts, images, etc). Then we provide the organization's specific overrides for any or all of those values and after that include the shared site styles. In this manner, whatever overrides have been provided will be used as the values for the site's variable replacements. This also provides the flexibility to add overrides to the site's styles, if need be, by placing those specific styles after the <code>@import './shared/site';</code> line. Obviously, the more specialization that is done at this level, the more it undermines the usefulness of this approach, as you end up building more and more of the css for each site specifically, but the power is in your hands to do as you please. The task then outputs the <code>.css</code> files in each licensed organization's directory and voilà, you now have a project that builds multiple css themes with one command. As this post is a bit longer than I was expecting, I'll follow up with one last post on this subject covering a few other npm scripts I created to help with setting up new licensed organizations, as well as getting these generated css files to play nicely with ASP.NET Core and Angular.</p>]]></content:encoded></item><item><title><![CDATA[Semantic UI One Build for Multiple Themes]]></title><description><![CDATA[<p>While Semantic UI does provide a nice way to create themes, I quickly discovered that its build engine is focused on building one custom theme per build/project. 
It took me a bit of time and a lot of console logging to come up with a modification to its gulp</p>]]></description><link>https://peinearydevelopment.azurewebsites.net/semantic-ui-multiple-themes-build/</link><guid isPermaLink="false">e271c09f-ff45-4786-bab1-349767dfb8ea</guid><dc:creator><![CDATA[PeinearyDevelopment]]></dc:creator><pubDate>Fri, 17 Nov 2017 15:32:31 GMT</pubDate><content:encoded><![CDATA[<p>While Semantic UI does provide a nice way to create themes, I quickly discovered that its build engine is focused on building one custom theme per build/project. It took me a bit of time and a lot of console logging to come up with a modification to its gulp build process that would work for building multiple themes in one step. When Semantic UI is installed via npm, whatever directory you specify for your semantic files will contain a <code>semantic/tasks/build.js</code> file.</p>

<pre><code>/*******************************
          Build Task
*******************************/

var
  // dependencies
  gulp         = require('gulp-help')(require('gulp')),
  runSequence  = require('run-sequence'),

  // config
  config       = require('./config/user'),
  install      = require('./config/project/install'),

  // task sequence
  tasks        = []
;


// sub-tasks
if(config.rtl) {
  require('./collections/rtl')(gulp);
}
require('./collections/build')(gulp);


module.exports = function(callback) {

  console.info('Building Semantic');

  if( !install.isSetup() ) {
    console.error('Cannot find semantic.json. Run "gulp install" to set-up Semantic');
    return 1;
  }

  // check for right-to-left (RTL) language
  if(config.rtl === true || config.rtl === 'Yes') {
    gulp.start('build-rtl');
    return;
  }

  if(config.rtl == 'both') {
    tasks.push('build-rtl');
  }

  tasks.push('build-javascript');
  tasks.push('build-css');
  tasks.push('build-assets');

  runSequence(tasks, callback);
};
</code></pre>

<p>My initial attempt to get the build to work with multiple themes was to modify the above file to the following:</p>

<pre><code>/*******************************
          Build Task
*******************************/

var
  // dependencies
  gulp         = require('gulp-help')(require('gulp')),
  runSequence  = require('run-sequence'),
  print        = require('gulp-print'),
  // config
  config       = require('./config/user'),
  install      = require('./config/project/install'),

  // task sequence
  tasks        = []
;


// sub-tasks
if(config.rtl) {
  require('./collections/rtl')(gulp);
}
require('./collections/build')(gulp);

const orgs = require('../../build/licensed-organizations.json').orgs;
module.exports = function(callback) {
  tasks.push('build-javascript');
  tasks.push('build-assets');
  var lastTaskName = '';

  for(var i = 0; i &lt; orgs.length; i ++) {
    console.info('Building Semantic');
    const org = orgs[i];

    gulp.task(`copy semantic ${org}`, function() {
      console.info(`copy semantic ${org}`);
      return gulp.src(`./orgs/${org}/semantic.json`)
                 .pipe(print())
                 .pipe(gulp.dest('../'));
    });

    gulp.task(`copy theme ${org}`, function() {
      console.info(`copy theme ${org}`);
      return gulp.src(`./orgs/${org}/theme.config`)
                 .pipe(print())
                 .pipe(gulp.dest('./src/'));
    });

    gulp.task(`build css ${org}`, [`build-css`]);

    if( !install.isSetup() ) {
      console.error('Cannot find semantic.json. Run "gulp install" to set-up Semantic');
      return 1;
    }

    tasks.push(`copy semantic ${org}`);
    tasks.push(`copy theme ${org}`);
    tasks.push(`build css ${org}`);
  };

  runSequence(...tasks, callback);
};
</code></pre>

<p>At a high level, it imports the organizations listed in the <code>licensed-organizations.json</code> file, iterates over them, creating a uniquely named gulp task for each one, and pushes each task onto an array. Once all of the organizations have been iterated over, <code>runSequence</code> is called, which should execute each task sequentially; the spread operator is used to pass all of the tasks to that function.</p>
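<p>The spread call is just plain JavaScript; a tiny stand-in for <code>runSequence</code> (invented here purely for illustration) shows how an array of task names built in a loop becomes individual arguments to a variadic function:</p>

```javascript
// fakeRunSequence is an invented stand-in for run-sequence's variadic API:
// it takes task names followed by a completion callback.
function fakeRunSequence(...args) {
  const callback = args.pop();
  const taskNames = args;
  taskNames.forEach(task => console.log(`running ${task}`));
  callback();
  return taskNames;
}

const tasks = ['copy semantic org1', 'build css org1'];
const ran = fakeRunSequence(...tasks, () => console.log('done'));
```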

<p>A closer look shows that the tasks attempt to take the <code>semantic.json</code> and <code>theme.config</code> files created for each organization, overwrite the default files from semantic, and then execute the <code>build-css</code> task that the semantic library creators provide to compile all of the less files into the css files that are actually served to the browser.</p>

<p><strong>The problem with this approach is that the build process only seems to use the original <code>semantic.json</code> file that was in place before the build started, even though it is successfully getting overwritten.</strong> For instance, in the original <code>semantic.json</code> file, the value of <code>output.packaged</code> is <code>dist/</code>. The file is successfully overwritten and the <code>output.packaged</code> value is <code>dist/org1</code> before the <code>build-css</code> task executes, but all of the output files still end up in <code>dist/</code>.</p>

<p>I decided to dig a bit deeper into the build engine for my second approach. The default build files include a <code>semantic/tasks/build/css.js</code> file which contains the tasks relating to the css portion of the Semantic UI build. The file looks as follows:</p>

<pre><code>/*******************************
          Build Task
*******************************/

var
  gulp         = require('gulp'),

  // node dependencies
  console      = require('better-console'),
  fs           = require('fs'),

  // gulp dependencies
  autoprefixer = require('gulp-autoprefixer'),
  chmod        = require('gulp-chmod'),
  clone        = require('gulp-clone'),
  flatten      = require('gulp-flatten'),
  gulpif       = require('gulp-if'),
  less         = require('gulp-less'),
  minifyCSS    = require('gulp-clean-css'),
  plumber      = require('gulp-plumber'),
  print        = require('gulp-print'),
  rename       = require('gulp-rename'),
  replace      = require('gulp-replace'),
  runSequence  = require('run-sequence'),

  // config
  config       = require('../config/user'),
  tasks        = require('../config/tasks'),
  install      = require('../config/project/install'),

  // shorthand
  globs        = config.globs,
  assets       = config.paths.assets,
  output       = config.paths.output,
  source       = config.paths.source,

  banner       = tasks.banner,
  comments     = tasks.regExp.comments,
  log          = tasks.log,
  settings     = tasks.settings
;

// add internal tasks (concat release)
require('../collections/internal')(gulp);

module.exports = function(callback) {

  var
    tasksCompleted = 0,
    maybeCallback  = function() {
      tasksCompleted++;
      if(tasksCompleted === 2) {
        callback();
      }
    },

    stream,
    compressedStream,
    uncompressedStream
  ;

  console.info('Building CSS');

  if( !install.isSetup() ) {
    console.error('Cannot build files. Run "gulp install" to set-up Semantic');
    return;
  }

  // unified css stream
  stream = gulp.src(source.definitions + '/**/' + globs.components + '.less')
    .pipe(plumber(settings.plumber.less))
    .pipe(less(settings.less))
    .pipe(autoprefixer(settings.prefix))
    .pipe(replace(comments.variables.in, comments.variables.out))
    .pipe(replace(comments.license.in, comments.license.out))
    .pipe(replace(comments.large.in, comments.large.out))
    .pipe(replace(comments.small.in, comments.small.out))
    .pipe(replace(comments.tiny.in, comments.tiny.out))
    .pipe(flatten())
  ;

  // two concurrent streams from same source to concat release
  uncompressedStream = stream.pipe(clone());
  compressedStream   = stream.pipe(clone());

  // uncompressed component css
  uncompressedStream
    .pipe(plumber())
    .pipe(replace(assets.source, assets.uncompressed))
    .pipe(gulpif(config.hasPermission, chmod(config.permission)))
    .pipe(gulp.dest(output.uncompressed))
    .pipe(print(log.created))
    .on('end', function() {
      runSequence('package uncompressed css', maybeCallback);
    })
  ;

  // compressed component css
  compressedStream = stream
    .pipe(plumber())
    .pipe(clone())
    .pipe(replace(assets.source, assets.compressed))
    .pipe(minifyCSS(settings.minify))
    .pipe(rename(settings.rename.minCSS))
    .pipe(gulpif(config.hasPermission, chmod(config.permission)))
    .pipe(gulp.dest(output.compressed))
    .pipe(print(log.created))
    .on('end', function() {
      runSequence('package compressed css', maybeCallback);
    })
  ;

};
</code></pre>

<p>I updated the file to:</p>

<pre><code>const console = require('better-console');
const extend = require('extend');
const fs = require('fs');
const gulp = require('gulp');
const autoprefixer = require('gulp-autoprefixer');
const chmod = require('gulp-chmod');
const minifyCSS = require('gulp-clean-css');
const clone = require('gulp-clone');
const concat = require('gulp-concat');
const concatCSS = require('gulp-concat-css');
const dedupe = require('gulp-dedupe');
const flatten = require('gulp-flatten');
const header = require('gulp-header');
const gulpif = require('gulp-if');
const less = require('gulp-less');
const plumber = require('gulp-plumber');
const print = require('gulp-print');
const rename = require('gulp-rename');
const replace = require('gulp-replace');
const uglify = require('gulp-uglify');
const requireDotFile = require('require-dot-file');
const runSequence = require('run-sequence');

const config = require('../config/project/config');
const defaults = require('../config/defaults');
const install = require('../config/project/install');
const tasks = require('../config/tasks');
const banner = tasks.banner;
const comments = tasks.regExp.comments;
const log = tasks.log;
const settings = tasks.settings;
const filenames = tasks.filenames;

const orgs = requireDotFile(`organizations.json`, __dirname).orgs;

module.exports = function(callback) {
    // shared across all organizations: the gulp callback should fire only
    // after both the compressed and uncompressed streams of every org finish
    let tasksCompleted = 0;
    const maybeCallback = function() {
        tasksCompleted++;
        if(tasksCompleted === 2 * orgs.length) {
            callback();
        }
    };

    orgs.forEach(org =&gt; {
        const userConfig = requireDotFile(`semantic.${org}.json`, __dirname);
        const gulpConfig = (!userConfig) ? extend(true, {}, defaults) : extend(false, {}, defaults, userConfig);
        const compiledConfig = config.addDerivedValues(gulpConfig);
        const globs = compiledConfig.globs;
        const assets = compiledConfig.paths.assets;
        const output = compiledConfig.paths.output;
        const source = compiledConfig.paths.source;

        const cssExt = { extname: `.${org}.css` };
        const minCssExt = { extname: `.${org}.min.css` };

        let stream;
        let compressedStream;
        let uncompressedStream;

        console.info('Building CSS');

        if( !install.isSetup() ) {
            console.error('Cannot build files. Run "gulp install" to set-up Semantic');
            return;
        }

        // unified css stream
        stream = gulp.src(source.definitions + '/**/' + globs.components + '.less')
            .pipe(plumber(settings.plumber.less))
            .pipe(less(settings.less))
            .pipe(autoprefixer(settings.prefix))
            .pipe(replace(comments.variables.in, comments.variables.out))
            .pipe(replace(comments.license.in, comments.license.out))
            .pipe(replace(comments.large.in, comments.large.out))
            .pipe(replace(comments.small.in, comments.small.out))
            .pipe(replace(comments.tiny.in, comments.tiny.out))
            .pipe(flatten())
        ;

        // two concurrent streams from same source to concat release
        uncompressedStream = stream.pipe(clone());
        compressedStream   = stream.pipe(clone());

        // uncompressed component css
        uncompressedStream
            .pipe(plumber())
            .pipe(replace(assets.source, assets.uncompressed))
            .pipe(rename(cssExt))
            .pipe(gulpif(compiledConfig.hasPermission, chmod(compiledConfig.permission)))
            .pipe(gulp.dest(output.uncompressed))
            .pipe(print(log.created))
            .on('end', function() {
            runSequence(`package uncompressed css ${org}`, maybeCallback);
            })
        ;

        // compressed component css
        compressedStream
            .pipe(plumber())
            .pipe(clone())
            .pipe(replace(assets.source, assets.compressed))
            .pipe(minifyCSS(settings.minify))
            .pipe(rename(minCssExt))
            .pipe(gulpif(compiledConfig.hasPermission, chmod(compiledConfig.permission)))
            .pipe(gulp.dest(output.compressed))
            .pipe(print(log.created))
            .on('end', function() {
            runSequence(`package compressed css ${org}`, maybeCallback);
            })
        ;
        // the per-org package tasks must be defined inside the forEach so that
        // org, globs, output and compiledConfig are in scope
        gulp.task(`package uncompressed css ${org}`, function() {
            return gulp.src(`${output.uncompressed}/**/${globs.components}.${org}${globs.ignored}.css`)
                .pipe(plumber())
                .pipe(dedupe())
                .pipe(replace(assets.uncompressed, assets.packaged))
                .pipe(concatCSS(`semantic.${org}.css`, settings.concatCSS))
                .pipe(gulpif(compiledConfig.hasPermission, chmod(compiledConfig.permission)))
                .pipe(header(banner, settings.header))
                .pipe(gulp.dest(output.packaged))
                .pipe(print(log.created))
            ;
        });

        gulp.task(`package compressed css ${org}`, function() {
            return gulp.src(`${output.uncompressed}/**/${globs.components}.${org}${globs.ignored}.css`)
                .pipe(plumber())
                .pipe(dedupe())
                .pipe(replace(assets.uncompressed, assets.packaged))
                .pipe(concatCSS(`semantic.${org}.min.css`, settings.concatCSS))
                .pipe(gulpif(compiledConfig.hasPermission, chmod(compiledConfig.permission)))
                .pipe(minifyCSS(settings.concatMinify))
                .pipe(header(banner, settings.header))
                .pipe(gulp.dest(output.packaged))
                .pipe(print(log.created))
            ;
        });
    });
};
</code></pre>

<p>My thinking behind this attempt was the following: the original task is set up to find all of the relevant <code>.less</code> files based on the configuration files. I wanted to read the <code>organizations.json</code> file, iterate through each organization and create a new set of tasks for each one using the configuration files created for each individual theme. I soon found the problem with this approach: the build process only seems to use the original <code>theme.config</code> file. I tried pointing the build at <code>theme.org1.config</code>, etc., but it didn't work and didn't produce any error.</p>

<p><em>I tried a number of iterations on the above attempt with no success. I wasn't willing to concede that it was impossible, though, short of rewriting the whole build process from scratch.</em></p>

<p>I finally found my solution. In the end, I modified the <code>semantic/tasks/build.js</code> file to what I've included below. Just before the <code>module.exports</code>, I include the same JSON file referenced in the last post that contains an array of organizations licensed to use the product. I then created a loop which iterates over each organization and creates three gulp tasks for each.</p>

<ol>
<li>The <code>theme.config</code> gets copied to the location the build engine is expecting to find it in.  </li>
<li>The <code>build-css</code> task is called.  </li>
<li>The built css files get copied to a dist location.</li>
</ol>

<p>The for loop then pushes the names of these newly created gulp tasks into an array. After the loop finishes, the function ends with <code>runSequence(...tasks, callback);</code>. This uses the spread operator to pass all of the tasks in the array as steps to be run in sequence by gulp. Currently this is the biggest downside of this approach: since the steps aren't run in parallel, the build time grows with the number of organizations. I tried a few other avenues to make things run in a more parallel fashion, but found myself <a href="https://stackoverflow.com/questions/44031816/is-there-a-way-to-build-multiple-semantic-ui-themes-in-the-same-project">debugging semantic's</a> build system <a href="http://forums.semantic-ui.com/t/how-can-i-build-multiple-themes/462">much more</a> than I had wanted to, so I stuck with this approach for now. Due to this limitation, and the fact that this particular project only uses the styling from the semantic framework but not any of its JavaScript plugins, you can see that there are a few tasks I've commented out to speed up the build process a bit.</p>
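<p>To make the spread operator usage concrete, here is a minimal, self-contained sketch; <code>fakeRunSequence</code> is a stand-in I made up for the real run-sequence plugin, and simply records the task names it receives, in order, before firing the final callback.</p>

```javascript
// Stand-in for run-sequence: the last argument is the completion callback,
// every argument before it is a task name to run in order.
function fakeRunSequence(...args) {
  const callback = args.pop();
  callback(null, args);
}

const tasks = ['build-assets', 'copy theme org1', 'copy output org1'];
let ranInOrder;
// The spread operator turns the array into individual arguments,
// exactly as runSequence(...tasks, callback) does in the build file.
fakeRunSequence(...tasks, (err, ran) => { ranInOrder = ran; });
console.log(ranInOrder.join(' -> '));
// → build-assets -> copy theme org1 -> copy output org1
```

<p>Passing the array itself(without the spread) would hand run-sequence a single argument that it would try to run as a group, which is not what we want here.</p>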

<pre><code>/*******************************
          Build Task
*******************************/

var
  // dependencies
  gulp         = require('gulp-help')(require('gulp')),
  runSequence  = require('run-sequence'),

  // config
  config       = require('./config/user'),
  install      = require('./config/project/install'),

  // task sequence
  tasks        = []
;


// sub-tasks
if(config.rtl) {
  require('./collections/rtl')(gulp);
}
require('./collections/build')(gulp);

const orgs = require('../../build/licensed-organizations.json').orgs;
module.exports = function(callback) {
  console.info('Building Semantic');

  if( !install.isSetup() ) {
      console.error('Cannot find semantic.json. Run "gulp install" to set-up Semantic');
      return 1;
  }

  tasks.push('build-assets');

  for (var i = 0; i &lt; orgs.length; i++) {
    // check for right-to-left (RTL) language
    // if(config.rtl === true || config.rtl === 'Yes') {
    //     gulp.start('build-rtl');
    //     return;
    // }

    // if(config.rtl == 'both') {
    //     tasks.push('build-rtl');
    // }

    // tasks.push('build-javascript');

    /*
        replace tasks.push('build-css');
    */
    const org = orgs[i];

    gulp.task(`copy theme ${org}`, function() {
      return gulp.src(`./src/themes/${org}/theme.config`)
                 .pipe(gulp.dest('./src/'));
    });

    gulp.task(`build css ${org}`, [`build-css`]);

    gulp.task(`copy output ${org}`, [`build css ${org}`], function() {
      return gulp.src(`./temp/**/*.css`)
                .pipe(gulp.dest(`./dist/${org}`));
    });

    tasks.push(`copy theme ${org}`);
    tasks.push(`copy output ${org}`);
  }

  // runSequence(tasks, callback);
  runSequence(...tasks, callback);
};
</code></pre>

<p>This all took me a tremendous amount of time(way more than I was planning on) and frustration, but also left me with a great amount of satisfaction. One key thing that really helped me along the way was remembering that with npm, the actual files that get run are copied to your computer as uncompiled artifacts, which makes going into them and altering them very easy. While I'm sure there are better ways to debug gulp tasks using Visual Studio Code, I am unaware of how to do that, so I used the tried and true method of <code>console.log</code> statements in combination with a number of <code>JSON.stringify</code> calls.</p>
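<p>For what it's worth, that debugging technique amounts to something like the following(the config object here is a made-up stand-in for the real gulp config):</p>

```javascript
// Dump a nested config object with JSON.stringify so the derived values
// are readable in a terminal; the third argument pretty-prints with
// 2-space indentation.
const compiledConfig = { paths: { output: { uncompressed: 'dist/components' } } }; // stand-in
const dump = JSON.stringify(compiledConfig, null, 2);
console.log(dump);
```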

<p>Having gotten the build to work the way I wanted it to, there was still the problem of making this a reproducible build process. It needed to work on my machine as well as on anyone else's machine(including the build server) that pulls the solution down from source control. For this piece I used a bit of trickery that I've picked up regarding npm scripts.</p>

<p>I plan on writing a bit more about npm scripts in a later post, but for now: after one pulls down a JavaScript codebase, it is very common practice to have to run the <code>npm install</code> command. This tells the node package manager to pull down all the packages that you have defined as requirements for your code to work. You can define a <code>postinstall</code> script, which npm will run once the installation of all packages is complete. I defined mine as follows: <code>"postinstall": "ncp build/templates/semantic-ui-build.override.js src/styles/semantic/tasks/build.js"</code>. <code>ncp</code> is an npm module used for copying files. All this does is take the modified file shown above(which I placed in a <code>build/templates</code> directory) and copy it to the place where I specified the installation of my semantic files(<code>src/styles</code>). Once this is done, when the semantic build task gets executed(<code>gulp --gulpfile ./src/styles/semantic/gulpfile.js</code>), our override is in place and the multiple themes will be compiled.</p>
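<p>Pulled together, the relevant pieces of the <code>package.json</code> would look something like the following(the <code>ncp</code> version shown here is illustrative, not prescriptive):</p>

```json
{
  "scripts": {
    "postinstall": "ncp build/templates/semantic-ui-build.override.js src/styles/semantic/tasks/build.js"
  },
  "devDependencies": {
    "ncp": "^2.0.0",
    "semantic-ui": "2.2.13"
  }
}
```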

<p>The last caveat I will add is that this was written against the current version of the semantic-ui npm package(<code>2.2.13</code> at the time of this writing). It is very possible that semantic-ui will publish a newer version with updates to their build process that could break this override. Currently, when you run <code>npm install semantic-ui --save-dev</code>, it will add that package to your <code>package.json</code> with the version <code>^2.2.13</code>, which means that on your build server, npm will pull down the latest minor or patch release allowed by that range. Therefore, npm will pull down <code>2.2.14</code> or <code>2.3.0</code> automatically on a fresh install if the Semantic UI team has published one of those versions as the latest package version to their package feed, but it wouldn't pull down <code>3.0.0</code> if that is the latest package. The most straightforward way to mitigate the issue with the override would be to simply update the version of the semantic-ui package in your <code>package.json</code> to be exactly <code>2.2.13</code>. That tells npm to pull that package version specifically. Then, when there are newer versions, you can install them explicitly(<code>npm install semantic-ui@2.2.14 --save-dev &amp;&amp; npm run postinstall</code>) and test to make sure the process still works for the latest Semantic UI version.</p>]]></content:encoded></item><item><title><![CDATA[Multi-themed Web Project: Back-end]]></title><description><![CDATA[<p>As mentioned in my introductory post on this topic, there are three main areas that I see specific concerns regarding approaching this type of a problem.</p>

<ol>
<li>The database and its data: its access/security.  </li>
<li>The server side portion of the application.  </li>
<li>The client side portion of the application.</li>
</ol>

<p>I outlined</p>]]></description><link>https://peinearydevelopment.azurewebsites.net/multi-themed-web-project-back-end/</link><guid isPermaLink="false">8c8ec21f-b0c8-4520-b627-e57ca334b6f9</guid><category><![CDATA[Angular]]></category><category><![CDATA[aurelia-cli]]></category><category><![CDATA[ASP.NET Core]]></category><category><![CDATA[multi-themed application]]></category><dc:creator><![CDATA[PeinearyDevelopment]]></dc:creator><pubDate>Fri, 21 Jul 2017 15:56:16 GMT</pubDate><content:encoded><![CDATA[<p>As mentioned in my introductory post on this topic, there are three main areas that I see specific concerns regarding approaching this type of a problem.</p>

<ol>
<li>The database and its data: its access/security.  </li>
<li>The server side portion of the application.  </li>
<li>The client side portion of the application.</li>
</ol>

<p>I outlined the high level decisions I made regarding the database in the last post and hope to address, a little more in depth, the server side decisions/implementations in this post.</p>

<p>Because the company this application is being developed for is a '.NET shop', the decision was made to use ASP.NET Core. We could have just as easily utilized the full ASP.NET Framework and MVC 5.*, with the same basic techniques. The future of .NET development seems to be in Core, though, and since this was a greenfield project with minimal dependencies(all of which have a .NET Core dll), it seemed like the perfect place and time to dip our toes into this brave new world.</p>

<p>Here is the code for a bare bones MVC controller that would be provided with one of Visual Studio's built-in templates.</p>

<pre><code>public class HomeController : Controller
{
    public IActionResult Index()
    {
        return View();
    }
}
</code></pre>

<p>The default conventions will cause the engine to look for, route to and render the view file found at <code>~/Views/Home/Index.cshtml</code>. If one wanted to be more explicit, they could <code>return View("Index");</code> instead. What is interesting about this is that the argument passed in is just a string. The views are dynamically compiled, and if one wanted to, they could do something like <code>return View("About");</code> just as easily. As long as the view engine finds a view at <code>~/Views/Home/About.cshtml</code>, everything will continue to execute properly, and when the user explicitly or implicitly goes to the <code>/Home/Index</code> route, they will be shown the 'About' page. Taking this one small step further, the string passed into the View method can include '/', which allows the view rendering engine to navigate into directories as well to locate its view.</p>

<p>With this in mind, we made the decision to create a directory under Home called "LicensedOrganizations". The reason we created a dedicated directory for this was to enable us to have a directory ignore rule in place in our <code>.gitignore</code> file to exclude this directory. As will soon be discussed, the .cshtml files that will end up in this directory will be created by the build machine, based on the <code>index.html</code> file output by Angular's cli. While all of the JavaScript files will remain the same regardless of the accessing application, the look and feel of each licensed organization will be different due to the differing linked style sheets, favicon files and logo files. At the moment, there hasn't been a need for it, but this also provides us the flexibility of creating custom .cshtml file templates that could provide a branded skeleton for each 'site', including custom navigation headers and footers that match the rest of the licensed organization's other websites.</p>
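<p>The ignore rule itself is a one-liner; assuming the directory layout described above, the <code>.gitignore</code> entry would look something like this:</p>

```
# generated by the build from Angular's index.html output; never committed
Views/Home/LicensedOrganizations/
```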

<p>Most of the front-end concerns for this application will be discussed in another post, but the one area that seems to cross this boundary are the initial view pages that the ASP.NET engine will be relied on to serve. Since this piece of the equation has already been mentioned, I figured I could continue to elaborate on it while glossing over(for now) all of the rest of the front-end build processes. The structure of a default Angular application created by the cli will look as follows: <br>
<code>/node_modules
/src
    /app
    /environments
    favicon.ico
    index.html
    main.ts
    polyfills.ts
    styles.css
    tsconfig.app.json
    typings.d.ts
.angular-cli.json
.editorconfig
.gitignore
package.json
tsconfig.json
tslint.json</code></p>

<p>The build process can be invoked by executing the command <code>node_modules/.bin/ng build</code>. The default cli project will already have a short-hand for that command wired up in the npm scripts section of the <code>package.json</code> file, so the same result can be achieved with <code>npm run build</code>. Invoking the cli's build process creates an additional directory, <code>dist</code>, as a sibling of <code>src</code>.</p>

<p>The <code>dist</code> directory will contain all of the compiled js files(the css files are included in the main.*.js file) as well as an <code>index.html</code> file containing the proper script links to those compiled js files. This last piece is key. When running the build in 'prod' mode, a hash value is added to each of the created js files that is meant to help with 'cache-busting'. This is great, but is a bit unpredictable as theoretically the hash is based on the compiled Angular js files, not the source typescript files. Those script tags are inserted right before the closing <code>body</code> tag on the page. What I did at this point was to insert a few comment tags(e.g. <code>&lt;!--FAVICON--&gt;</code>, <code>&lt;!--STYLES--&gt;</code> and <code>&lt;!--LOGO--&gt;</code>). I then created a small js script and placed it in a special build directory.</p>

<pre><code>import { readdir, readFile, writeFile } from 'fs';
const value: IThemeJson = require('./licensed-organizations.json');

value.orgs.forEach(org =&gt; {
    const cssFilesDirectoryPath = `./dist/styles/${org}`;
    createStylesLinks(org, cssFilesDirectoryPath, createIndexFile);
});

function createStylesLinks(org: string, dir: string, cb: (org: string, stylesLinks: string) =&gt; void) {
    readdir(dir, (err, directoryItems) =&gt; {
        if (err) return console.error(err);

        var stylesLinks = '';
        directoryItems.forEach((directoryItem, $index, $array) =&gt; {
            if (directoryItem.startsWith(org) &amp;&amp; directoryItem.endsWith('.min.css')) {
                stylesLinks += `&lt;link type="text/css" rel="stylesheet" href="./styles/${org}/${directoryItem}" media="all" /&gt;`;
            }

            if ($array.length - 1 === $index) {
                cb(org, stylesLinks);
            }
        });
    });
}

function createIndexFile(org: string, stylesLinks: string) {
    const indexTemplatePath = `./dist/index.html`;
    const outputDir = `./dist`;

    readFile(indexTemplatePath, 'utf8', (err, indexTemplateContents) =&gt; {
        if (err) return console.error(err);

        var organizationIndexContents = `@{Layout=null;}\r\n${indexTemplateContents}`
            .replace('&lt;!--FAVICON--&gt;', `&lt;link rel="shortcut icon" href="./assets/icons/${org}.ico" /&gt;`)
            .replace('&lt;!--STYLES--&gt;', stylesLinks)
            .replace('&lt;!--LOGO--&gt;', `&lt;img class="app loading logo" src="./assets/logos/${org}.png" /&gt;`);
        var organizationIndexPath = `${outputDir}/${org}.cshtml`;

        writeFile(organizationIndexPath, organizationIndexContents, 'utf8', err =&gt; {
            if (err) return console.error(err);
        });
    });
}
</code></pre>

<blockquote>
  <p>This script is written in Typescript and is run using an npm package called ts-node. I plan to go into more detail with this in the post on the front-end.</p>
</blockquote>

<p>This small script loops through all entries in the <code>licensed-organizations.json</code> file(more to come on this in the next installment) and replaces those comment tags in the <code>index.html</code> with the desired branding information for that organization and then creates a .cshtml file for that organization and places it in the <code>dist</code> directory. Yes, I know, ASP.NET Core by default will look for its views in its <code>Views</code> directory. There are ways to change this default behavior, but in the end, we decided just to copy those files into our <code>Views/Home/LicensedOrganizations</code> directory at a later step in the build process, as well as place most of the other files from the <code>dist</code> directory into the <code>wwwroot</code> directory, which is where ASP.NET Core, by default, will look for all static files to be served by the application.</p>

<p>In the <code>package.json</code> file, I then added another npm script called <code>postbuild</code> with a value of <code>ts-node build/createIndexFiles.js</code>. If the reader isn't familiar with npm scripts, one is able to specify that one script should run before or after another by prefacing the script name with <code>pre</code> or <code>post</code>. With the npm script we just defined, when we execute <code>npm run build</code> from the command line, a script named <code>prebuild</code> would run first if one existed(there isn't one at this point), then the script named <code>build</code>, and finally <code>postbuild</code>. So <code>ng build</code> gets called and, upon its completion, our custom script shown above executes, creating a .cshtml file for each organization and placing it in the <code>dist</code> directory.</p>
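<p>The pre/post convention can be sketched as follows; <code>lifecycleOrder</code> is a made-up helper that mimics how npm expands a script name into its lifecycle sequence, skipping any script that isn't defined:</p>

```javascript
// Given a script name NAME, npm runs preNAME, then NAME, then postNAME,
// skipping any of the three that aren't defined in package.json.
const scripts = {
  build: 'ng build',
  postbuild: 'ts-node build/createIndexFiles.js'
};

function lifecycleOrder(name, defined) {
  return [`pre${name}`, name, `post${name}`].filter(s => s in defined);
}

console.log(lifecycleOrder('build', scripts).join(', '));
// → build, postbuild
```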

<p>Now, with the custom index files being built, the only remaining question is how the server-side portion of the application knows which view to route the user to. As mentioned in the previous post, this is relatively simple, as each organization reaches the application through a unique URL. In the HttpContext that comes along with every request, we can access the host name through <code>context.Request.Host.Host</code>. In our instance, we save each organization's configured URL in a database along with other organization-specific information. We perform some caching so this lookup doesn't have to happen for every request, and we also create a new ClaimsPrincipal object upon a new visit that is passed along with the request as part of the cookie. This is one of the factors that enables us to set up a demo site where an internal employee can demonstrate the power of the system by showing multiple different client implementations.</p>
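<p>In JavaScript pseudo-form(the real implementation is C# working off of <code>HttpContext</code>), the cached host-to-organization lookup amounts to something like the following; <code>queryDatabase</code> and <code>fakeDb</code> are stand-ins for the real data access:</p>

```javascript
// Cache organization lookups by host name so the database is only hit
// the first time a given host is seen.
const orgCache = new Map();

function resolveOrganization(host, queryDatabase) {
  if (!orgCache.has(host)) {
    orgCache.set(host, queryDatabase(host)); // only on a cache miss
  }
  return orgCache.get(host);
}

// Stand-in database lookup that counts how often it is actually called.
let dbCalls = 0;
const fakeDb = host => { dbCalls += 1; return { org: host.split('.')[0] }; };

resolveOrganization('org1.example.com', fakeDb);
resolveOrganization('org1.example.com', fakeDb); // served from the cache
console.log(dbCalls);
// → 1
```

<p>A production version would also need cache invalidation when an organization's configuration changes, which the sketch above deliberately leaves out.</p>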

<p>With that we have the back-end concerns taken care of for a single web application to serve individualized themes per accessing organization.</p>

<hr>

<p><strong>UPDATE</strong></p>

<p>Upon completion of this build process, I found a <a href="http://michaco.net/blog/Angular4GettingHashedWebpackBundlesWorkingInASPNETCoreMVC">blog post</a> that discusses the ability in ASP.NET Core to include wildcards in a view's script includes. He has a fuller example there, but given this approach my <code>createIndexFile</code> method mentioned above can largely go away. Instead of having to gather these values at build time by reading the file names, one can leverage TagHelpers in ASP.NET Core to handle this for you.</p>

<p>To take advantage of this, in your view, you should add:</p>

<pre><code>@addTagHelper *, Microsoft.AspNetCore.Mvc.TagHelpers

&lt;environment names="Development,Production"&gt;
    &lt;script type="text/javascript" asp-src-include="~/inline.*.bundle.js"&gt;&lt;/script&gt;
    &lt;script type="text/javascript" asp-src-include="~/polyfills.*.bundle.js"&gt;&lt;/script&gt;
    &lt;script type="text/javascript" asp-src-include="~/styles.*.bundle.js"&gt;&lt;/script&gt;
    &lt;script type="text/javascript" asp-src-include="~/vendor.*.bundle.js"&gt;&lt;/script&gt;
    &lt;script type="text/javascript" asp-src-include="~/main.*.bundle.js"&gt;&lt;/script&gt;
&lt;/environment&gt;
</code></pre>

<p><em>NOTE</em>: The only catch with this approach is that you need to ensure that your build/deploy process cleans out the previously built *.js files before adding the new ones or your application will break. I think this is a best practice in any case, but just wanted to make the reader aware of this pitfall.</p>]]></content:encoded></item><item><title><![CDATA[Aurelia cli and bootstrap-datepicker]]></title><description><![CDATA[<p>This is a bit of a long post, but it is one I squeezed in due to a very interesting back and forth I had with 'Jerry T' the other day in the comments to my blog post entitled <a href="https://peinearydevelopment.azurewebsites.net/aurelia-and-bootstrap-datepicker-using-typescript-and-jspm">Aurelia and bootstrap-datepicker using typescript and jspm</a>. Two main themes were</p>]]></description><link>https://peinearydevelopment.azurewebsites.net/aurelia-cli-and-bootstrap-datepicker/</link><guid isPermaLink="false">a575e2a5-f304-40fa-8033-b566ad7e19f8</guid><category><![CDATA[aurelia]]></category><category><![CDATA[aurelia-cli]]></category><category><![CDATA[bootstrap-datepicker]]></category><category><![CDATA[typescript]]></category><dc:creator><![CDATA[PeinearyDevelopment]]></dc:creator><pubDate>Thu, 06 Jul 2017 22:45:03 GMT</pubDate><content:encoded><![CDATA[<p>This is a bit of a long post, but it is one I squeezed in due to a very interesting back and forth I had with 'Jerry T' the other day in the comments to my blog post entitled <a href="https://peinearydevelopment.azurewebsites.net/aurelia-and-bootstrap-datepicker-using-typescript-and-jspm">Aurelia and bootstrap-datepicker using typescript and jspm</a>. Two main themes were touched upon: Aurelia vs. Angular(which I hope to write about in a later post), Aurelia-cli vs Aurelia's skeleton projects.</p>

<p>One of the challenges that comes with doing front-end development these days revolves around the breakneck speed of change. Even if you have settled on a framework, its maintainers are constantly improving it and creating releases for those improvements. That is <strong><em>AWESOME</em></strong>, but at the same time can be <strong>VERY</strong> challenging to deal with. Documentation that was there one day might be irrelevant or completely gone the next. Trying to figure out which information(blogs, StackOverflow Q&amp;As) is relevant to the specific version you are working with can be extremely difficult, and knowing when and how to update these libraries so as not to fall too far behind is also a balancing act.</p>

<p>For my day job, the CTO of the company wanted to use Angular*new* for their projects. I have utilized and recommended Aurelia in the past and have tried to use it for a personal project that I'm in the middle of, in order to keep current with it. There are a number of things I really like and dislike about both Aurelia and Angular.</p>

<p>As of this writing, the above post was created 8 months ago. A lot has changed since then. First, I noticed that Jerry was utilizing the Aurelia skeleton projects as his starting point. To be honest, 8 months ago, that's what I was using as well. The cli was pretty new, raw and frustrating to use. Looking at it now though, there hasn't been a new release of the skeletons for ~7 months. At first glance, it looked like the repository for the skeleton projects was active. I was about to 'pen' a comment recommending that if one wanted to use the skeleton projects, they should probably clone the repo and start from there. On further inspection though, while still showing some signs of activity, the repository doesn't seem to have had any meaningful changes made to it in months as well.</p>

<p>On the other hand, the cli had a release a week ago. The official Aurelia documentation gives both the cli and the skeletons space, but it seems as though the cli is the framework's preferred choice for success. I remember initially that one of the biggest hurdles I had with the cli was integrating third-party libraries. While the documentation was there, it never seemed to 'just work' the way I would have expected. I recommended that Jerry give the cli a try and he suggested I write a post about the cli ;). While this isn't that post exactly, I decided to revisit third-party library integration and the cli and I figured, what better way than to provide a part II to the bootstrap-datepicker integration post.</p>

<p>As it turns out, much to my surprise, this was relatively easy. I'll spell out the steps I took, annoyances encountered and some thoughts on the cli along the way.</p>

<p>To start, I created a new folder on my desktop <code>au-cli</code>. Obviously, this can be whatever you want it to be, but that name was sufficient for me.</p>

<p><mark><em>This is my opinion and many would disagree with me:</em> You can install the cli globally, but given the rate of change, I like to install the cli locally(<code>npm install aurelia-cli</code>) every time I start a new project so I know I'm getting the latest release. Since I don't install it globally, a lot of my npm commands will look like <code>node_modules/.bin/au ...</code>. If you have the cli installed globally(<code>npm install -g aurelia-cli</code>), you should run these commands as <code>au ...</code> instead.</mark></p>

<p>Given the above note, in the <code>au-cli</code> directory, I opened my command prompt(git bash shell actually) and ran <code>npm install aurelia-cli</code>. Again, you can skip this if you have it installed globally.</p>

<p>Then to create a new project execute: <code>node_modules/.bin/au new third-party-test</code>. I called it third-party-test, but again, call the project whatever suits you best. This will give you a list of options you must choose from for it to configure the project properly. Here are the options I chose:</p>

<ul>
<li>Loader? -> 1. RequireJS</li>
<li>Default or custom setup? -> 3. Custom</li>
<li>Transpiler? -> 2. Typescript</li>
<li>Template setup? -> 3. Maximum minification</li>
<li>Css Preprocessor? -> 3. Sass</li>
<li>Configure unit testing? -> 1. Yes</li>
<li>Default code editor? -> 1. VSCode</li>
<li>Create project? -> 1. Yes</li>
<li>Install dependencies? -> 1. Yes</li>
</ul>

<p>Once it is done with the install, I ran into my first annoyance. The command prompt looked as follows: <br>
<img src="https://peinearydevelopment.azurewebsites.net/content/images/2017/07/au-cli_new_end.png" alt=""></p>

<p>It has a nice message and the 'Happy Coding!' at the end would seem to indicate that the setup is done and it is time to get down to business, but it doesn't exit and display the cursor. I waited a while, expecting it to do just that when it was completely done, but it never did. Instead, you have to press <code>Ctrl + c</code> to get your cursor back.</p>

<p>Then I navigated into the project directory(<code>cd third-party-test</code>). As in the previous post, I tried to run a start command to make sure everything was working and I got an error message. I thought that I didn't have the command correct, so I opened the <code>package.json</code> file to see what I should be invoking and found the scripts section missing completely. I assume this is by design, but why? I would think it would be nice to have barebones build, run, watch and test commands in there to help get started. Since there aren't any, I added one to get started: <br>
<code>"scripts": {
    "watch": "au run --watch"
  },
</code></p>

<p>Now I ran <code>npm run watch</code>, launched a browser and navigated to <code>http://localhost:9000</code>. Assuming everything works you should see the 'App works!' message on the screen. While the skeletons come with a few example files, the cli assumes you know what you are doing and only comes with the bare minimum Aurelia project. In a similar vein, it doesn't come with Bootstrap and jQuery integrated by default.</p>

<p>At this point it is time to add the third-party libraries. Looking at the official documentation, the cli seems to be <a href="http://aurelia.io/hub.html#/doc/article/aurelia/framework/latest/the-aurelia-cli">fairly well documented</a>(though it seems to be out of date). Looking through it, there is a section dedicated to <a href="http://aurelia.io/hub.html#/doc/article/aurelia/framework/latest/the-aurelia-cli/10">Adding Client Libraries to Your Project</a>. Before I followed that blindly though, I ran <code>node_modules/.bin/au help</code> in my command prompt. Lo and behold, I saw this in the console: <br>
<img src="https://peinearydevelopment.azurewebsites.net/content/images/2017/07/au-cli_help.png" alt=""></p>

<p>Before just trying out the highlighted commands, I tried to find the documentation for those commands on Aurelia's site and much to my surprise, found <strong><em>NONE</em></strong>. I decided to give the cli version a try anyway. <br>
Running the command <code>node_modules/.bin/au install bootstrap-datepicker jquery</code> installs those packages and updates the <code>package.json</code> file. The console then prompted me to help configure the css files needed that it found in the installed packages. At this time I chose option 2 which didn't configure any of the css files as my original post didn't include them either. <em>At a later step I do add one of them in.</em> Again, at this point, the cli doesn't exit and return the cursor(now I'm a little peeved as this seems by design, not just an oversight and I'm not sure what the benefit of this 'feature' is).</p>

<p>Now for the code changes: <br>
I created the file <code>src/resources/attributes/datepicker.ts</code> with the contents:</p>

<pre><code>import {DOM, customAttribute, inject} from 'aurelia-framework';
import 'bootstrap-datepicker';

@customAttribute('datepicker')
@inject(DOM.Element)
export class DatepickerCustomAttribute {
    private value: Date;

    constructor(private element: Element) {
    }

    public attached() {
        // Initialize the bootstrap-datepicker plugin once Aurelia attaches the element to the DOM.
        let datepickerOptions: DatepickerOptions = { autoclose: true, format: 'yyyy-mm-dd' };

        $(this.element)
            .datepicker(datepickerOptions)
            .on('changeDate', evt =&gt; {
                this.value = evt.date;
            });
    }

    public detached() {
        // Tear the plugin down when the element is removed from the DOM.
        $(this.element).datepicker('destroy');
    }
}
</code></pre>

<p>Updated <code>src/resources/index.ts</code> to:</p>

<pre><code>import {FrameworkConfiguration} from 'aurelia-framework';

export function configure(config: FrameworkConfiguration) {
  config.globalResources(['./attributes/datepicker']);
}
</code></pre>

<p>Updated <code>src/app.ts</code> to:</p>

<pre><code>import {bindable} from 'aurelia-framework';

export class App {
  @bindable public date = null;

  public dateChanged(newValue, oldValue) {
    console.log('new:' + newValue);
    console.log('old:' + oldValue);
  }
}
</code></pre>

<p>Updated <code>src/app.html</code> to:</p>

<pre><code>&lt;template&gt;
  &lt;input datepicker.two-way="date" type="date" /&gt;
&lt;/template&gt;
</code></pre>

<p>Now when I ran <code>npm run watch</code> again, there were a few errors thrown by gulp, but the project continued to successfully build(is this by design? if so, why? if there are build errors, shouldn't execution stop?) and launch. Navigating to <code>http://localhost:9000</code>, you can see the datepicker showing in the UI. Pretty easy and cool!</p>

<p>I wanted to get the build errors to go away, which was easy. In the install step before, I neglected to include the typings files. The old blog post talks about running <code>typings install...</code>, but npm and the typescript/typings world have evolved and typings files can now be installed with npm. Run <code>npm install --save-dev @types/jquery @types/bootstrap-datepicker</code> and when you run the project again, you will see that the build errors are gone.</p>

<p>One more thing I thought I should add for completeness was one of the datepicker stylesheets. As is, the datepicker appears, but it doesn't look very good. As seen above, the cli has an <code>install</code> command and an <code>import</code> command. I believe the intent of import is for when one installs a dependency through npm and then realizes that it needs to be hooked into the Aurelia build. This could be done, as the documentation states, by updating the <code>aurelia_project/aurelia.json</code> file by hand, but I ran <code>node_modules/.bin/au import bootstrap-datepicker</code> and it provided me with the same prompt as before to help include the css files from the bootstrap-datepicker npm package in this project. This time I chose option 1(I want to choose which css files I need). It then displayed the css files it found in that package, and I chose to include 2(dist/css/bootstrap-datepicker.min.css).</p>

<p>I ran the application again and, <em>sad-trumpet</em>, the datepicker didn't appear any differently. Using the developer tools, I could see that the css file was included in the <code>vendor-bundle.js</code> file, but the styles weren't being applied. In order to get them to show up, I had to add the requisite <code>require</code> tag to the application. I updated <code>src/app.html</code> to this:</p>

<pre><code>&lt;template&gt;
  &lt;require from="bootstrap-datepicker/dist/css/bootstrap-datepicker.min.css"&gt;&lt;/require&gt;
  &lt;input datepicker.two-way="date" type="date" /&gt;
&lt;/template&gt;
</code></pre>

<p>Since watch was running, the app recompiled and the browser refreshed when I saved the file, and the styles were now applied.</p>

<p>I hope this helps. Happy coding!</p>]]></content:encoded></item><item><title><![CDATA[Multi-themed Web Project]]></title><description><![CDATA[<h3 id="thetaskathand">The task at hand</h3>

<p>I was tasked with creating a website for a company where they wanted to sell licenses to utilize their software to other organizations and each organization should have the ability to 'skin' the UI to match their organizational brand. The idea being, the data that this</p>]]></description><link>https://peinearydevelopment.azurewebsites.net/multi-themed-web-project/</link><guid isPermaLink="false">dec3f628-b90c-425a-98d8-1eea85a445c3</guid><category><![CDATA[Angular]]></category><category><![CDATA[Semantic UI]]></category><category><![CDATA[ASP.NET Core]]></category><category><![CDATA[multi-themed application]]></category><dc:creator><![CDATA[PeinearyDevelopment]]></dc:creator><pubDate>Wed, 05 Jul 2017 21:01:12 GMT</pubDate><content:encoded><![CDATA[<h3 id="thetaskathand">The task at hand</h3>

<p>I was tasked with creating a website for a company where they wanted to sell licenses to utilize their software to other organizations and each organization should have the ability to 'skin' the UI to match their organizational brand. The idea being, the data that this company owns is what lends most of the value to its software. Each organization that purchases a license to utilize the software is really interested in the data it provides, but would like its users to see that data displayed in a manner consistent with their internal organizational brand. There are a few different approaches that could be taken to this problem. I hope to detail the one I've taken and some of the decisions I've made along the way to enable this path.</p>

<h3 id="initialthoughts">Initial thoughts</h3>

<p>To begin, my goal was to create one code-base that would be able to dynamically load a view and/or stylesheets per licensing organization. The company this application was being developed for is a .NET shop and, as each licensing organization wants to utilize its own brand, the understanding is that each licensing organization would set up a DNS entry pointing at one of our IP addresses. That being the case, we could easily have set up multiple websites in IIS, each with its own binding and code base. This could still be accomplished utilizing a number of code sharing techniques such as git branches or specialized code bundling steps upon completion of code compilation. The biggest discouraging factors with that approach were how easily each of these code branches could get out of sync, as well as the api/db considerations that would come along with any modifications.</p>

<p>Accomplishing these aims required some thought and planning on the back-end of the website as well as the front-end. For this project we are utilizing the following(high-level) technology stack: <a href="https://angular.io/">Angular</a>, <a href="http://sass-lang.com/">SCSS</a>, <a href="https://semantic-ui.com/">Semantic-UI</a>, <a href="https://docs.microsoft.com/en-us/aspnet/core/">ASP.NET Core</a>, MS Sql and <a href="https://www.visualstudio.com/team-services/">Visual Studio Team Services</a>.</p>

<h3 id="backenddesigndecisions">Back-end design decisions</h3>

<p>Starting at the data access layer, I've seen different approaches to a similar type of setup. There are those who would prefer to have a separate set of databases for each client, ostensibly to keep the data sets smaller, queries faster and data segregated due to privacy concerns. Most of the information the licensing organizations were exposing, however, was going to be served through a publicly accessible website, and there were potential requirements down the line to create a combined site for multiple clients. With these pieces in mind, the decision was made to keep all of the various clients' data in one set of sql tables, requiring a concerted effort in the data access layer logic to ensure that only the proper data gets returned per organization and user.</p>

<p>Moving up a layer, the decision was made to utilize ASP.NET Core MVC and a special build step to enable the dynamic loading of the initial landing page based on the organization being served. The default build with the Angular CLI produces an index file with the bundled JavaScript files injected as script tags into the HTML. This file is then utilized as the template for all of the organizational specific landing pages which have their skin information injected into it as well. The organization is tracked as part of the security principal and the main landing page is routed to through a convention based naming schema that is utilized throughout the project for a variety of organization specific files. This structure even allowed us to create a demo landing page for sales where they could choose from a list of organizations to demonstrate the power of the skinning ability provided with the product.</p>
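<p>To make that template-injection idea concrete, here is a minimal sketch of what such a build step could look like. Everything in it is hypothetical(the function name, the <code>/skins/</code> path and the organization keys are invented for illustration); it only shows the convention of stamping each organization's skin information into the CLI-generated index file:</p>

```typescript
// Hypothetical sketch of the special build step described above. None of these
// names (createLandingPage, the /skins/ path, the org keys) come from the
// actual project; they illustrate the convention-based idea only.
function createLandingPage(indexTemplate: string, orgKey: string): string {
  // Convention-based stylesheet name, one per licensing organization.
  const skinLink = `<link rel="stylesheet" href="/skins/${orgKey}.css">`;
  // Inject the skin just before the closing head tag so it can override defaults.
  return indexTemplate.replace('</head>', `${skinLink}</head>`);
}

// Stamp out one landing page per organization from the same CLI-built template.
const indexTemplate = '<html><head><title>App</title></head><body></body></html>';
const landingPages: Record<string, string> = {};
for (const orgKey of ['contoso', 'fabrikam']) {
  landingPages[orgKey] = createLandingPage(indexTemplate, orgKey);
}
```

<p>A real implementation would read the file the Angular CLI emitted and write one output file per organization, but the injection itself is this simple.</p>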

<h3 id="frontenddesigndecisions">Front-end design decisions</h3>

<p>The main front-end considerations to date have revolved around the css frameworks/preprocessors that are being utilized as well as a few custom build steps. As the application develops, the expectation will be that some of the HTML components will be dynamically revealed/hidden based on organizational settings, but to date, those particular issues haven't been addressed.</p>

<p>Semantic-UI was chosen as the css framework due to the <a href="https://semantic-ui.com/usage/theming.html">powerful theming capabilities</a> it presents out of the box. As will be discussed in a later post, the build step that enabled this to work as intended was a bit more involved than I had hoped. I'm using NPM scripts to perform most of the front-end tasks. Semantic-UI provides its build steps through Gulp and it took a bit of trickery to get the multiple themes to build in one build step. The team also decided to utilize SCSS as a css preprocessor for all the additional css rules we had to provide that were site specific or needed to fill the 'gap' in some of Semantic's stylesheets.</p>

<p>This is an ongoing, exciting project that has really enabled me to test my creativity and get to know a few of the tools I use regularly much more intimately. I look forward to sharing some of the discoveries I've had along the way in upcoming posts.</p>]]></content:encoded></item><item><title><![CDATA[Writer's Block]]></title><description><![CDATA[<p>I've been working on quite a number of things lately that I've found very exciting and have found it quite frustrating that I haven't <del>found</del> made the time to write about them.</p>

<p>While this post is really sparse, I'm utilizing it to put out the things I hope to blog</p>]]></description><link>https://peinearydevelopment.azurewebsites.net/writers-block/</link><guid isPermaLink="false">cf9760f9-75bf-4954-8ee7-9e6144af3e8c</guid><dc:creator><![CDATA[PeinearyDevelopment]]></dc:creator><pubDate>Tue, 20 Jun 2017 19:03:16 GMT</pubDate><content:encoded><![CDATA[<p>I've been working on quite a number of things lately that I've found very exciting and have found it quite frustrating that I haven't <del>found</del> made the time to write about them.</p>

<p>While this post is really sparse, I'm utilizing it to put out the things I hope to blog about in the near future as a kick in the pants to actually write about them, or break the writer's block.</p>

<p>I've started a few new projects.</p>

<ul>
<li><p>One application is supposed to be accessed by multiple clients each of which would like to have a custom look-and-feel for their users in line with their corporate brand.</p>

<ul><li>The application is utilizing:
<ul><li>Angular 4</li>
<li>ASP.NET Core</li>
<li>Semantic-UI</li>
<li>Entity Framework Core</li>
<li>Azure Search</li>
<li>VSTS</li></ul></li>
<li>I've created an npm script that allows me to build multiple Semantic UI themes at once.</li>
<li>I've created a few additional custom build scripts(some of which I've found I don't actually need) and have learned a lot in the process.</li></ul></li>
<li><p>Another application is a pro-bono project I'm working on for a cause I believe in.</p>

<ul><li>The application is utilizing:
<ul><li>Aurelia</li>
<li>ASP.NET Core</li>
<li>SCSS</li>
<li>Entity Framework Core</li>
<li>Azure Blob Storage</li></ul></li></ul></li>
</ul>

<p>Both of these have pushed my limits in different ways and I hope to be able to share those experiences and bruises to help pave the way forward for others.</p>]]></content:encoded></item><item><title><![CDATA[StringBuilder vs. string.join]]></title><description><![CDATA[<p>During an 'in-person' code review at a company, the developer whose code was under review made a comment that got me very curious. It was in regards to these lines of code:</p>

<pre><code>StringBuilder dateList = new StringBuilder();

foreach (string date in lstSelected.Items)
{
     dateList.AppendFormat("{0},", date);
}

return dateList.ToString().TrimEnd(</code></pre>]]></description><link>https://peinearydevelopment.azurewebsites.net/stringbuilder-vs-string-join/</link><guid isPermaLink="false">d542b2f8-50cc-4214-8d02-ff1b80ae7112</guid><category><![CDATA[c#]]></category><dc:creator><![CDATA[PeinearyDevelopment]]></dc:creator><pubDate>Tue, 14 Mar 2017 21:13:03 GMT</pubDate><content:encoded><![CDATA[<p>During an 'in-person' code review at a company, the developer whose code was under review made a comment that got me very curious. It was in regards to these lines of code:</p>

<pre><code>StringBuilder dateList = new StringBuilder();

foreach (string date in lstSelected.Items)
{
     dateList.AppendFormat("{0},", date);
}

return dateList.ToString().TrimEnd(',');
</code></pre>

<p>Looking at the code, I had to take a moment to understand what it was doing(<em>if it doesn't jump out at you, it is creating a comma delimited list</em>). I had commented that I would prefer using <code>string.Join(",", lstSelected.Items)</code> as it is more concise and readable to me. I also prefer to utilize built in functions whenever possible as I assume the guys at Microsoft are pretty smart and that the utilities they provide are most likely more battle-tested than something I would come up with on my own. My assumption was that this developer was unaware of this utility in the c# library and would be happy to take advantage of it when brought to his attention.</p>

<p>Quite to the contrary, the developer responded that he was used to dealing with buffers and the way he has it is much more efficient and allocates much less memory.</p>

<p>Assuming the developer knew his stuff and wouldn't make such a statement otherwise, I stated that the one line I presented was more readable and the potential efficiencies gained through the developer’s method weren’t worth the loss of readability. This comment was made with the knowledge that this was in a method that would be joining at most 100 strings and wouldn’t be run frequently. The developer however wouldn't budge and I didn't feel it was worth continuing the conversation, so we continued with the code review and went back to our respective workstations.</p>

<p>All that being said though, I was curious. I wanted to try to test the efficiencies or lack thereof regarding string.Join vs. the StringBuilder.</p>

<p>I created two very basic tests. I wanted to launch them both independently while I had some profiler tools running to see if I could notice a difference between the two methods.</p>

<pre><code>[TestClass]
public class UnitTest1
{
    List&lt;string&gt; list;
    const int NUMBER_OF_STRINGS_TO_JOIN = 1000;

    [TestInitialize]
    public void Initialize()
    {
        list = new List&lt;string&gt;();

        for (var i = 0; i &lt; NUMBER_OF_STRINGS_TO_JOIN; i++)
        {
            list.Add(Guid.NewGuid().ToString());
        }
    }

    [TestMethod]
    public void TestMethod1()
    {
        var a = string.Join(",", list);
    }

    [TestMethod]
    public void TestMethod2()
    {
        var sb = new StringBuilder();
        foreach (var item in list)
        {
            sb.Append(item).Append(",");
        }
        var a = sb.ToString().TrimEnd(',');
    }
}
</code></pre>

<p>I fired up JetBrains DotMemory and DotTrace to enable some profiling, but as I created the tests and started running them it seemed as though I wasn’t seeing any significant difference between the two methods. I increased the <code>NUMBER_OF_STRINGS_TO_JOIN</code> constant value until I came to 10,000,000. This is where something interesting happened. Both tests failed. I took a screen shot of the test window error messages:</p>

<p>As can be seen from the output of <code>TestMethod1</code>, the method utilizing <code>string.Join</code>, appears as though it uses <code>StringBuilder</code> under the covers. <br>
<img src="https://peinearydevelopment.azurewebsites.net/content/images/2017/01/TM1.png" alt=""></p>

<p>Comparing this with the output of <code>TestMethod2</code>, I concluded that they were essentially doing the same exact thing. One took one line and was much more readable at a glance, while the other consumed 6 lines and took fellow developers a bit more effort to realize the code's intent. <br>
<img src="https://peinearydevelopment.azurewebsites.net/content/images/2017/01/TM2.png" alt=""></p>
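<p>The same readability comparison is easy to demonstrate in a runnable way. The sketch below is a TypeScript analogy(not the c# under test); it contrasts a built-in join with the manual append-and-trim pattern from the code review:</p>

```typescript
// Manual pattern, analogous to the StringBuilder loop above: append every
// item with a trailing delimiter, then trim the last delimiter off.
function joinManually(items: string[]): string {
  let result = '';
  for (const item of items) {
    result += item + ',';
  }
  // Drop the trailing comma, mirroring TrimEnd(',').
  return result.endsWith(',') ? result.slice(0, -1) : result;
}

const dates = ['2017-01-01', '2017-02-01', '2017-03-01'];
const builtIn = dates.join(',');    // one readable line
const manual = joinManually(dates); // several lines for the same output
```

<p>Both produce the identical comma delimited string; the only difference is how quickly a reader can tell.</p>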

<p>I shared this with the developer and much to my surprise he still stuck to his guns. I was disappointed by this, but also took the opportunity to remind myself once again that I need to continually be revisiting my coding patterns and practices to see if I really understand what’s going on under the hood. I didn’t know how <code>string.Join</code> was really operating. <em>I don't think we as developers <strong>NEED</strong> to know how every method we use functions under the covers.</em> When we are presented with an opportunity though, we should utilize it to learn, develop and grow. I was really concerned that not only had I been using, but promoting for a while, a pattern that wasn’t terribly efficient. Now I feel much more comfortable continuing to reuse this pattern and encourage others to do the same. More importantly though, I walked away with a better understanding of one of the myriad tools we as developers utilize on a regular basis. We can't always change those around us, but we can take every opportunity provided to us to encourage our own personal growth as well as those around us.</p>]]></content:encoded></item><item><title><![CDATA[Logging: Extensions for ease of use]]></title><description><![CDATA[<p>As discussed previously, the objective of this project is to create a uniform centralized way of handling event logging across our applications. 
Instead of building a logging framework from scratch, the decision was made to create a wrapper around a publicly available library(NLog) that can allow us to tap</p>]]></description><link>https://peinearydevelopment.azurewebsites.net/logging-extensions-for-ease-of-use/</link><guid isPermaLink="false">2f3668d4-d486-4737-909a-8f16fc5d5684</guid><category><![CDATA[nlog]]></category><category><![CDATA[logging]]></category><category><![CDATA[c#]]></category><category><![CDATA[framework]]></category><category><![CDATA[Architecture]]></category><dc:creator><![CDATA[PeinearyDevelopment]]></dc:creator><pubDate>Wed, 01 Mar 2017 16:22:40 GMT</pubDate><content:encoded><![CDATA[<p>As discussed previously, the objective of this project is to create a uniform centralized way of handling event logging across our applications. Instead of building a logging framework from scratch, the decision was made to create a wrapper around a publicly available library(NLog) that can allow us to tap into the power of the open source community while making minor modifications specific to the needs of Peineary Development. A previous post discussed the core project that was created to wrap NLog.</p>

<hr>

<p>I wanted to take one more post to at least briefly document the rest of the projects that are included in library and their intended use.</p>

<h3 id="contracts">Contracts</h3>

<p>These really could have been discussed with Core. The contracts project contains all of the classes that are used externally to interact with the Logging project. Any of our projects that want to log some information will need to utilize one of the classes provided by the contracts project to do so. Also, utilizing a micro-service architecture, we set up a micro-service that handles the actual ingestion and storing of events to a data store. That project expects only to receive an object type defined in this contracts project.</p>

<p>For the initial implementation, we only anticipated/created/exposed three event types. </p>

<ul>
<li><code>ActionTakenEvent</code> - An event that could be logged anytime an action was taken in our system. For instance, an object was created, deleted, updated or an export/import occurred.</li>
<li><code>LogEvent</code> - This is a more generic type of log event. This could be used to throw any message on the log, be they informational or application errors. This is the type that is used by most of the projects in the Logging solution as the intent is to globally handle errors in a uniform way across our applications.</li>
<li><code>LoginEvent</code> - An event that is logged upon a user logging into any of our applications. The intent is to be able to track usage.</li>
</ul>

<h3 id="nlogconsole">NLog.Console</h3>

<p>As can be seen with the rest of the solutions, there is very little code as each is meant to encapsulate one purpose. The <code>NLog.Console</code> project has the default <code>Configure</code> method which, when called, configures logging for a desktop application. It needs to be called at the main entry point of the application and when done, it will make the LogManager available throughout the application as well as setup a handler for any otherwise unhandled exceptions the application might throw through its execution.</p>

<h3 id="nlogweb">NLog.Web</h3>

<p>This is very similar to <code>NLog.Console</code> except it doesn't setup a handler for any otherwise unhandled exceptions the application might throw through its execution. That setup is handled in the Http and MVC projects as there are different plugin points for each project type.</p>

<h3 id="nlogwebhttp">NLog.Web.Http</h3>

<p>As can be seen by looking at the code, as we get to the projects with lengthier namespaces, we are getting to projects that are more specialized and therefore contain much less code. This enables users to pull in only what they need and nothing more. <code>Nlog.Web.Http</code> contains the few classes that are specific to setting up logging in an ASP.NET Web.Api project. There is a <code>GlobalErrorLogger</code> that taps into a hook in the request/response pipeline that enables handling of any exception that is thrown in the application and is otherwise unhandled. The <code>Configure</code> method on the <code>WebApiLoggerConfigurer</code> then allows this setup to occur on <code>Application_Start</code>, as is commonly done with the route configurations and other configurable objects.</p>

<h3 id="nlogwebmvc">NLog.Web.Mvc</h3>

<p><code>Nlog.Web.Mvc</code> contains the few classes that are specific to setting up logging in an ASP.NET MVC project. There is a <code>GlobalErrorLogger</code> that taps into a hook in the request/response pipeline that enables handling of any exception that is thrown in the application and is otherwise unhandled. The <code>Configure</code> method on the <code>WebMvcLoggerConfigurer</code> then allows this setup to occur as would be done with other configurable objects on <code>Application_Start</code> as is commonly done with the route configurations and others.</p>

<h3 id="nlogwebhttpwebactivatornlogwebmvcwebactivator">NLog.Web.Http.WebActivator &amp; NLog.Web.Mvc.WebActivator</h3>

<p>Both of these projects tap into the power of the <a href="https://github.com/davidebbo/WebActivator"><code>WebActivatorEx</code> NuGet package</a>. This package enables the above two hooks to be registered in the request/response pipeline of the application without the need for the developer to actually call the <code>Configure</code> methods in the <code>Application_Start</code> method. This is really powerful in that it can reduce configuration mistakes and enable ease and consistency of use.</p>

<h3 id="sandbox">Sandbox</h3>

<p>As a very rudimentary way of creating some 'manual' tests for this solution, I have created three projects in the <code>Sandbox</code> directory. These were just ways for me to demonstrate to myself that the Logger would work as expected and provide examples for other developers to know how to pull in and configure the logger in their projects. Unfortunately, they can't be run automatically by a build system and as programmed, require certain setups to be done on the developer's box for them to work, such as having <code>RabbitMQ</code> installed.</p>

<p>While this set of libraries has a number of things that are specific to Peineary Development, my hope in publishing this post is that others would be able to find some benefit in seeing how we utilize and wrap the <code>NLog</code> library to utilize its code-based configuration, how we were able to extend it through creating a custom <code>MassTransitTarget</code> and how we were able to utilize <code>WebActivatorEx</code> to automatically and consistently configure these logging capabilities across our projects with minimal setup and configuration effort.</p>]]></content:encoded></item><item><title><![CDATA[Logging: Designing the Core Wrapper]]></title><description><![CDATA[<hr>

<p><em>Brief Recap</em>
The objective of this project is to create a uniform centralized way of handling event logging across our applications. Instead of building a logging framework from scratch, the decision was made to create a wrapper around a publicly available library(<a href="http://nlog-project.org/">NLog</a>) that can allow us to tap into</p>]]></description><link>https://peinearydevelopment.azurewebsites.net/logging-designing-the-core-wrapper/</link><guid isPermaLink="false">cb8b94c6-c884-4ad4-8f2f-e5548c27315f</guid><category><![CDATA[c#]]></category><category><![CDATA[framework]]></category><category><![CDATA[logging]]></category><category><![CDATA[nlog]]></category><dc:creator><![CDATA[PeinearyDevelopment]]></dc:creator><pubDate>Fri, 20 Jan 2017 18:25:39 GMT</pubDate><content:encoded><![CDATA[<hr>

<p><em>Brief Recap</em>
The objective of this project is to create a uniform centralized way of handling event logging across our applications. Instead of building a logging framework from scratch, the decision was made to create a wrapper around a publicly available library(<a href="http://nlog-project.org/">NLog</a>) that can allow us to tap into the power of the open source community while making minor modifications specific to the needs of Peineary Development.</p>

<hr>

<h3 id="nlog">NLog</h3>

<p>The NLog project provides a very robust way to manage loggers as well as their ways and means of logging. It provides an internal logger(I know, meta!) that handles logging events specific to the logger. For instance, it logs when any given logger is started up.</p>

<p>The NLog project provides the means of configuring the logger through an XML file or through code. I tend to prefer code over configuration files, so I've created this project to provide a default setup for the configuration of NLog across our applications.</p>

<h5 id="target">Target</h5>

<blockquote>
  <p>Targets are used to display, store or pass log messages to another destination. NLog can dynamically write to one or multiple targets for each log message.</p>
</blockquote>

<p>A target is an object that contains information relevant to the way and means a given event should be logged. The default target for NLog is a <code>NullTarget</code> which does nothing. It completely throws the event away. They also provide a <code>ConsoleTarget</code>, which prints event information to the Console and a <code>FileTarget</code> which writes event information to a file on disk. A full listing of their targets can be found on their <a href="https://github.com/nlog/NLog/wiki/Targets">wiki</a>.</p>

<p>There is also a great story surrounding extending NLog through creating <a href="https://github.com/NLog/NLog/wiki/How%20to%20write%20a%20custom%20target">custom targets</a>.</p>

<h3 id="peinearydevelopmentframeworkloggingnlog">PeinearyDevelopment.Framework.Logging.NLog</h3>

<h5 id="core">Core</h5>

<p>This project provides the essential pieces for setting up logging in one of our applications. </p>

<h6 id="loggerconfigurer">Logger Configurer</h6>

<p>The LoggerConfigurer’s Configure method is the main entry point for setting up a logger for one of our applications. This uses a <code>FileTarget</code> to log its events.</p>

<p><em>The default location for any file log created through a <code>FileTarget</code> is located in <code>C:/logs</code>, but can be configured through setting the value for the AppSetting with the key <code>Logging.LogsPath</code>.</em></p>

<p>The InternalLogger logs when any logger starts up as well as when a logger target fails along with the exception and what target it falls back to following that failure.</p>

<p>This method also creates a target manager and the default targets, as well as the rules that apply to each, as described below.</p>

<h6 id="targetmanager">Target Manager</h6>

<p>The target manager is responsible for creating a 'wrapper target' of type <code>FallbackGroupTarget</code>, for properly creating and prioritizing the targets contained within it for every event the logger is responsible for logging, and for handling the lifetime of those loggers. The <code>GenerateDefaultTarget</code> method takes an array of target types to attempt, in the order they should be attempted.</p>

<h6 id="loggingrules">Logging Rules</h6>

<p>A logging rule defines which logger should log what and when: at what level and above(log levels listed in descending order: <code>Fatal</code>, <code>Error</code>, <code>Warn</code>, <code>Info</code>, <code>Debug</code>, <code>Trace</code>) a given target should log an event, and through which logger the event should be processed.</p>

<p><em>The default log level is set to <code>Warn</code>, but is configurable through setting the value for the AppSetting with the key <code>Logging.LogLevel</code>.</em></p>

<p>Any event sent to the LogManager will go through all of the rules to determine which rule(s) apply to it. There is a property on the rule called <code>Final</code> which will break that chain. If <code>Final</code> is set to <code>true</code>, then as soon as an event hits that rule and is applicable to that rule, that rule will process the event and no other rules will be evaluated for their applicability to that event.</p>

<h6 id="customtarget">Custom Target</h6>

<p>A custom target that we developed is the <code>MassTransitTarget</code>. This target uses the MassTransit project to publish the events it is given to a message queue, allowing the application to publish events to the queue and forget about them.</p>
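<p>The implementation isn't shown in this post, but the skeleton of a custom NLog target like <code>MassTransitTarget</code> follows NLog's standard extension point; the publish step in the comment is a placeholder, not the real code:</p>

```csharp
using NLog;
using NLog.Targets;

// Hypothetical sketch of the MassTransitTarget described above.
[Target("MassTransit")]
public class MassTransitTarget : TargetWithLayout
{
    protected override void Write(LogEventInfo logEvent)
    {
        var message = Layout.Render(logEvent);

        // Fire-and-forget: publish the rendered event to the configured
        // queue endpoint here, e.g. via MassTransit's bus Publish method.
    }
}
```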

<p><em>The URL of the message queue to publish to needs to be configured by setting the AppSetting with the key <code>Logging.Endpoint</code>. The username and password for connecting to RabbitMQ (the default message queue used behind our systems) default to <code>guest</code>/<code>guest</code>, which are RabbitMQ's own defaults, for ease of development. In other environments, those values should be overridden through the AppSettings with the keys <code>Logging.Username</code> and <code>Logging.Password</code> respectively.</em></p>

<p>A separate project will then contain a listener for these events, along with the logic determining if, how, when, and where to persist the events received.</p>

<p>The default target generated for our loggers is of the type <code>FallbackGroupTarget</code>, which takes an array of targets and tries the first one first, only moving on to the next one in line if the previous one fails. Our default fallback order is <code>MassTransitTarget</code>, then <code>FileTarget</code>.</p>

<h6 id="logmanagerextensions">Log Manager Extensions</h6>

<p>To ease proper use of this library, we also created a few log manager extensions that properly generate a <code>LogEventInfo</code> object and handle which Logger a given type is logged to, as well as at what level.</p>
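<p>The actual extension signatures aren't shown here, but one of them might be sketched, in a purely hypothetical form, as:</p>

```csharp
using System;
using NLog;

public static class LogManagerExtensions
{
    // Hypothetical sketch: build the LogEventInfo for the caller so every
    // call site logs at a consistent level with consistent metadata.
    public static void LogError(this ILogger logger, string message, Exception exception = null)
    {
        var eventInfo = new LogEventInfo(LogLevel.Error, logger.Name, message)
        {
            Exception = exception
        };

        logger.Log(eventInfo);
    }
}
```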

<p>While this article has described how we designed the core pieces of our logging wrapper, a subsequent post will discuss the additional projects we created to make these pieces even easier to use, regardless of whether the consuming application is a Console, WebApi, or MVC web app.</p>

<p>The code for this post can be found on <a href="https://github.com/PdFramework/Logging/tree/master/NLog.Core">GitHub</a>.</p>]]></content:encoded></item><item><title><![CDATA[Logging: The Problem]]></title><description><![CDATA[<p>One of the companies I've worked for is a fairly small development shop. One of their applications was a fat-client, WinForms application that has had little to no overarching project architectural work done to it. In helping rethink the architecture of their solution(s) and related data structures, one thing</p>]]></description><link>https://peinearydevelopment.azurewebsites.net/logging-the-problem-2/</link><guid isPermaLink="false">b3ae4fac-f1d2-4a94-8327-3da62d2fb77a</guid><category><![CDATA[c#]]></category><category><![CDATA[framework]]></category><category><![CDATA[logging]]></category><category><![CDATA[nlog]]></category><dc:creator><![CDATA[PeinearyDevelopment]]></dc:creator><pubDate>Fri, 06 Jan 2017 21:16:15 GMT</pubDate><content:encoded><![CDATA[<p>One of the companies I've worked for is a fairly small development shop. One of their applications was a fat-client, WinForms application that has had little to no overarching project architectural work done to it. In helping rethink the architecture of their solution(s) and related data structures, one thing the team noticed immediately was the haphazard and non-uniform way the application approached error handling and message logging.</p>

<p>When problems start to occur, uniform and proper error handling and message logging go a long way toward quickly identifying and homing in on the problematic areas of code, as well as the related data causing the issue. Without a uniform approach, we quickly found ourselves spending a lot of time trying to locate and reproduce bugs that were causing our users frustration.</p>

<p>One of the more common methods used in the application to handle this message logging was to send an email to a party in the organization affiliated with that area of the application (of course, the email addresses were stored in the config files). As you can imagine, this was difficult to maintain as people moved around, and most of the emails began to be ignored as their volume increased over time. On a number of issues, as we started to dig into them, we would hear, "Oh, yeah, we get an email every time that occurs, but we just ignore it. Things seemed to keep functioning correctly." Of course, there are a number of ways to mitigate some of the above issues (create a distribution list to send to instead of an individual, store those values in a database, routinely review the inboxes to find recurring errors), and if all of the errors were handled this way, I might have advocated mitigating before creating a different approach. Another consideration, though, was that if we wanted trace logging or informational messages, emailing all of them didn't seem like a great approach. It would also be nice to be able to dynamically change the level of logging output from any given application as the need arises.</p>

<p>I started to dig in to creating a uniform approach to logging, as well as a guide on exception handling for our applications. I wanted the solution to be something we could use across desktop and web applications, and I wanted to provide a standard setup that could be dropped into any project and allow the developer to start using the library with little or no additional setup. I also wanted to be able to extend the framework as the need arose and to allow overriding of the default configuration provided.</p>

<p>I started looking into the options available and decided that I didn't want to write my own from scratch. There were a number of reasons for this. <br>
1. Why reinvent the wheel? <br>
2. While more generic libraries tend to be larger and clunkier than custom solutions, they are also more battle-tested and hardened in a lot of ways. <br>
3. Generally you can find a solution that has a lot of flexibility; due to its broader audience, it needs to provide that flexibility to remain relevant. <br>
4. This was probably one of the biggest factors in my decision: DOCUMENTATION!!!</p>

<p>Yes, you read that right: documentation. You can read <a href="https://peinearydevelopment.azurewebsites.net/software-documentation-the-beauty-and-the-beast">my thoughts on documentation</a>. In a nutshell, I think it's important, but I also recognize the difficulty in providing quality, up-to-date documentation for the software we write. Being provided with the majority of the documentation for the core library is a big plus in my book. That allows me to spend more time standardizing and documenting the good patterns and practices I would like to see used across the organization regarding how we use these libraries.</p>

<p>After looking at the different options available in this space for the .NET landscape, I chose to go with <a href="http://nlog-project.org/">NLog</a>. It seemed like a very robust option with decent documentation that would allow me to realize the goals stated above. In subsequent posts, I hope to detail some of the decisions I made, document the wrappers we created, and describe how we plan to use them across our organization.</p>

<p>The code for these posts will be located on a <a href="https://github.com/PdFramework/Logging">GitHub repo</a>.</p>]]></content:encoded></item><item><title><![CDATA[Aurelia and bootstrap-datepicker using typescript and jspm]]></title><description><![CDATA[<p>I find that developing front-end applications utilizing the Aurelia framework is really a pleasure. The one thing that I struggle with though is the popularity of Angular 2.0, the ecosystem around it and the lack that seems to be there for Aurelia.</p>

<p>Developing in the modern world of Javascript</p>]]></description><link>https://peinearydevelopment.azurewebsites.net/aurelia-and-bootstrap-datepicker-using-typescript-and-jspm/</link><guid isPermaLink="false">244fd628-d94c-40d2-9caa-5db1a181e9ac</guid><category><![CDATA[jspm]]></category><category><![CDATA[aurelia]]></category><category><![CDATA[bootstrap-datepicker]]></category><category><![CDATA[typescript]]></category><dc:creator><![CDATA[PeinearyDevelopment]]></dc:creator><pubDate>Fri, 04 Nov 2016 18:07:05 GMT</pubDate><content:encoded><![CDATA[<p>I find that developing front-end applications utilizing the Aurelia framework is really a pleasure. The one thing that I struggle with though is the popularity of Angular 2.0, the ecosystem around it and the lack that seems to be there for Aurelia.</p>

<p>Developing in the modern world of JavaScript can be quite challenging. To work effectively on a front-end application, I find I need to be familiar with npm, jspm, HTML, CSS, JavaScript (in multiple flavors: ES5, emerging specs like ES6, ES7, etc.), gulp, Aurelia, TypeScript, and typings. While I realize that some of these are optional or replaceable with similar libraries, the idea is still the same: one needs to know a variety of languages and their nuances, as well as all of the supporting technology. The languages are constantly evolving, as are the supporting ecosystems, not least of which is the variability in browser implementations.</p>

<p>While this is true of any healthy language, the pace seems particularly fast in the world of JavaScript. I believe some of the issues could be resolved with better tooling around all of these libraries, and I have seen a nice push forward on that front with Visual Studio Code. Another area of improvement should come in documentation. This particular problem seems quite significant in the JavaScript ecosystem: there are tons of small modules, many of which do similar things. How does one know what is out there and how to choose between the various options?</p>

<p>While I feel as though Aurelia does a pretty decent job with documentation, I realize that it is a hard problem, and I am unsure how to suggest a way forward. As I struggle to get third-party libraries working with Aurelia, or fight through other implementation issues, I first try to search for answers. I then look to StackOverflow. If all else fails, I try to fight through it by making educated guesses about the issue and its solution. I have tried the Gitter channel, which by and large hasn't been too helpful, and if need be, I give up and try to find another tool that might solve the problem, at which point the cycle starts all over again.</p>

<p>Why is it so hard? The Aurelia team is trying to provide guidance and documentation on how to use the framework with a variety of different back ends, and with multiple different package managers, build tools, and languages (really varieties of the same language, which can nonetheless be quite distinct at times) on the front end. There are a number of good places to start, but I often find that they use a slightly different stack, which doesn't fully get me to where I need to go.</p>

<p>I would like to document here the solutions I've come up with in the hope of adding to Aurelia's ecosystem and potentially providing solutions for others so that they won't need to struggle through the same things I have for as long.</p>

<p>I've had a need in multiple applications to provide the user with a datepicker. While an <code>&lt;input type="date" /&gt;</code> in a page displayed in Chrome will automatically give the user a nice datepicker, unfortunately not all browsers do the same. There is a third-party library that seems to handle all browsers pretty nicely: <a href="https://github.com/uxsolutions/bootstrap-datepicker">bootstrap-datepicker</a>.</p>

<p>After working on it for a while, here are the steps I took to get it working with the Aurelia framework:</p>

<p>I started with the <a href="https://github.com/aurelia/skeleton-navigation/releases/">skeleton-typescript</a>. At the time of writing, the latest version is 1.0.3. It uses gulp as the front-end task runner, jspm as the UI package manager/module loader, npm for the packages used by the task runner, and TypeScript as the front-end development language.</p>

<hr>

<p><strong>NOTE</strong>: One issue I've run into using TypeScript to work on Aurelia applications is that any time I need jQuery to get a third-party library to function correctly, there is a typings conflict with the angular-protractor typings file (they both export <code>$</code>). At the moment, I'm not creating any e2e tests for my applications, so my solution is to remove two lines from the <code>typings.json</code> file, namely "angular-protractor" and "aurelia-protractor" in the "globalDevDependencies" section. I would like to find a better long-term solution, but for now this is sufficient for my needs.</p>

<hr>

<p>The Aurelia website says you have to run three commands to get all of the dependencies installed for your project: <code>npm install</code>, <code>jspm install</code>, <code>typings install</code>. If you look at the <code>package.json</code> file, though, under the scripts section you will notice an entry that looks like this: <code>"prepublish": "./node_modules/.bin/typings install"</code>. If you add another one under it with <code>"postinstall": "./node_modules/.bin/jspm install"</code>, then you will only have to run the <code>npm install</code> command.</p>
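<p>The relevant part of the <code>scripts</code> section would then look like this (other entries omitted):</p>

```json
{
  "scripts": {
    "prepublish": "./node_modules/.bin/typings install",
    "postinstall": "./node_modules/.bin/jspm install"
  }
}
```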

<p>As a precaution, I usually run <code>gulp watch</code> after this so I can make sure everything installed properly and the project runs before I start my updates.</p>

<p>Now, for the bootstrap-datepicker specifics:</p>

<p>There are a number of commands that need to be run from the command line to get the necessary packages/typing files.</p>

<p><code>jspm install npm:bootstrap-datepicker</code></p>

<p><code>typings install dt~bootstrap-datepicker --save --global</code></p>

<p><code>typings install dt~jquery --save --global</code></p>

<hr>

<p><strong>NOTE</strong>: You don't need to <code>jspm install jquery</code> because that is included in the skeleton.</p>

<hr>

<p>Create a file for your "custom attribute" (I called mine <code>datepicker.ts</code> and placed it in <code>src/custom-attributes</code>) with the following contents:</p>

<pre><code>import {DOM, customAttribute, inject} from 'aurelia-framework';
import 'bootstrap-datepicker';

@customAttribute('datepicker')
@inject(DOM.Element)
export class DatepickerCustomAttribute {
    private value: Date;

    constructor(private element: Element) {
    }

    // Initialize the jQuery datepicker plugin once the element is in the DOM
    // and push the picked date back into the bound value.
    public attached() {
        let datepickerOptions: DatepickerOptions = { autoclose: true, format: 'yyyy-mm-dd' };

        $(this.element)
            .datepicker(datepickerOptions)
            .on('changeDate', evt =&gt; {
                this.value = evt.date;
            });
    }

    // Tear the plugin down when the element leaves the DOM to avoid leaks.
    public detached() {
        $(this.element).datepicker('destroy');
    }
}
</code></pre>

<hr>

<p><strong>NOTE</strong>: If you didn't remove the lines in the <code>typings.json</code> file relating to the 'protractor' typings files, you will get an error with the above code. Again, this is due to the conflicting <code>$</code> exports between the 'angular-protractor' and 'jquery' typings files. To fix this, you can update your <code>typings.json</code> file and then delete the directory at <code>typings/globals/angular-protractor</code>, or you can simply comment out line 1839, <code>declare var $: cssSelectorHelper;</code>, in the file located at <code>typings/globals/angular-protractor/index.d.ts</code>.</p>

<hr>

<p>Just to provide a fully working example, I updated <code>welcome.html</code> to the following:  </p>

<pre><code class="language-html">    &lt;template&gt;
      &lt;require from="./custom-attributes/datepicker"&gt;&lt;/require&gt;

      &lt;div datepicker.two-way="date"&gt;&lt;/div&gt;
    &lt;/template&gt;
</code></pre>

<p><strong>UPDATE</strong>
<em>As noted by Jerry T (THANKS!) in a comment, with the html as stated above (the binding being on a div element), the datepicker doesn't hide and show by default the way one would expect, although it still allows the user to pick a date and the binding correctly changes. For the fuller set of functionality to work, use the <code>welcome.html</code> below.</em></p>

<pre><code class="language-html">    &lt;template&gt;
      &lt;require from="./custom-attributes/datepicker"&gt;&lt;/require&gt;

      &lt;input datepicker.two-way="date" type="date" /&gt;
    &lt;/template&gt;
</code></pre>

<p>And <code>welcome.ts</code> to the following:</p>

<pre><code>import {bindable} from 'aurelia-framework';

export class Welcome {
  @bindable public date = null;

  public dateChanged(newValue, oldValue) {
    console.log('new:' + newValue);
    console.log('old:' + oldValue);
  }
}
</code></pre>

<hr>

<p><strong>NOTE</strong>: This should work as is. I wanted to provide a fully functioning example. I have used other third-party tools as well in other components. One of the ones I used was select2. When I got it all set up and working, the two components didn't seem to be able to render on the same page. Digging into it a bit more, it seemed to be a problem with different jQuery versions being required by the two libraries. I have yet to find a satisfactory long-term solution to this. At the moment, the way I get this to work is by deleting a few lines from the <code>config.js</code> file after executing <code>jspm install</code>. In the "map" section, I looked for bootstrap-datepicker and select2 and removed the lines containing their jquery dependencies. These get put back every time one runs <code>jspm install</code>, so as I noted, it isn't a long-term solution, but it does work for now.</p>
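<p>For illustration, here is the general shape of the <code>config.js</code> "map" section in question; the version numbers are made up, and the nested <code>jquery</code> entries are the lines I delete so that both libraries fall back to the single top-level jQuery install:</p>

```javascript
// Fragment of a jspm-style config.js (version numbers are hypothetical).
System.config({
  map: {
    "bootstrap-datepicker": "npm:bootstrap-datepicker@1.6.4",
    "select2": "npm:select2@4.0.3",
    "npm:bootstrap-datepicker@1.6.4": {
      "jquery": "npm:jquery@2.2.4"  // remove this line by hand
    },
    "npm:select2@4.0.3": {
      "jquery": "npm:jquery@2.2.4"  // remove this line by hand
    }
  }
});
```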

<hr>

<p>Hope this helps!</p>]]></content:encoded></item></channel></rss>