RazorSPoint

ESPC17 – Discussions in Between

Sebastian Schütze
This entry is part 3 of 4 in the article series European SharePoint Conference 2017

Talk with Waldek Mastykarz (MVP, Rencore)

A colleague of mine in our company has run into several problems with ALM and SPFx that he could not figure out. I brought these problems up since they would be interesting for everybody. What I write down here are the results of the discussions.

Multiple CDN for one SPFx Extension Package

Getting right to the point: the problem with the packaging is that you usually have multiple stages to which you want to deploy your SPFx package. This becomes a problem when you are using CDNs. What he ended up doing was to create one package for each stage in one build definition.

And why is that?

Because solution packaging (at build time) is the only point where you can inject the CDN URL into the SPFx package. If you want to do it later (at release time), you would have to open the package, find the multiple places where the URL is set, replace the strings, and zip the package back together again.

So, why not just do that in the build?

Best practice states that you do not want to care about environment-dependent parameters at build time; you want to handle them at release time. The release should have all the environment-specific parameters in place. Furthermore, you do not want to make it too complicated. Three builds mean that you need to make sure you are releasing the right package in the right release, and you need extra tasks and conditions to ensure that. The complexity goes through the roof.

What is the solution?

I had this talk one day before the new SPFx announcements. And the main announcement was that the files that should be uploaded to the CDN are now integrated into the SPFx package. In detail, this means you don't have to care which CDN the files get uploaded to. This is taken care of after the upload to the app catalog: the app catalog then uploads the assets to the CDN connected to your tenant.

There is only one catch for now: this works only for the O365 native CDN. If you want to use your own CDN (e.g. Azure CDN), you lose that advantage. So the only solution (that still keeps to best practices) would be to open the package at release time and replace the CDN URL strings.

SPFx Build on VSTS is failing when packaging

The other problem is that packaging the solution fails on the build agent. When packaging a solution, you have the option to set "skipFeatureDeployment" to "true". This gives you more control over where the feature is deployed. If it were set to "false", the feature would automatically be available everywhere. This is shown in the snippet below.

{
  "$schema": "https://dev.office.com/json-schemas/spfx-build/package-solution.schema.json",
  "solution": {
    "name": "sdb-mcs-teamsites-customizer-client-side-solution",
    "id": "0632b472-dc9a-471b-be65-a54b2275de85",
    "version": "1.0.0.0",
    "skipFeatureDeployment": true
  },
  "paths": {
    "zippedPackage": "solution/teamsites-customizer.sppkg"
  }
}

When you package the solution on your client, you get the following warning:

Warning – TeamSitesApplicationCustomizer: Admins can make this solution available to all sites in the organization, but extensions won’t automatically appear. SharePoint Framework extensions must be specifically associated with sites, lists, and fields programmatically to be visible to site users.

This is no problem on the client side. But when you execute this on a build agent in VSTS, the build fails directly after packaging. If you set this flag back to "false", the warning disappears and the build ends successfully.

Why is a warning causing the build to fail?

This I needed to research myself. Taking a deeper look into the SPFx node_modules, we find that the warning string comes from the module "sp-build-core-tasks", in the library module "packageSolution", in the file "validateSolutionDefinition.js", lines 15-18.

if (component.manifest.componentType === 'Extension') {
   logWarning(component.manifest.alias + ": Admins can make this solution available to all sites in the"
      + " organization, but extensions won\u2019t automatically appear. SharePoint Framework extensions must"
      + " be specifically associated to sites, lists, and fields programmatically to be visible to site users.");
   hasWarning_1 = true;
}

This is not the problem. The message seems to be logged correctly as a warning. But where does this message go?

Update: I made a false conclusion. The text above is still correct, but the text below is wrong; everything that was crossed out is wrong.

In line 8 we can see that this "logWarning" function comes from "@microsoft/gulp-core-build". Searching around a bit in that module, I found the file "logging.js". In line 150 of that file, we can clearly see the culprit.

What is the problem? This is a warning, not an error, yet it gets output to console.error(). Instead, it should go to console.warn().

It is a bit harder to test this on an agent. But when I managed to log in to the agent and change that line of code before triggering the build (without cleaning up the previously downloaded source code), the warning still appeared, but the task ended successfully.

So what is really going wrong? When the "skipFeatureDeployment" flag mentioned above is set to false, it works; when I set it to true, it does not. So it is still clearly the warning. But there is another flag in the command that we are running:

gulp package-solution --ship

When I am not using the "--ship" flag, it still works even with the warning. So this flag is what causes the warning to be forwarded to the error output of the console. Why is that? We are still in the "@microsoft/gulp-core-build" module, but now we look at the internal flag "shouldWarningsFailBuild", which is defined in "index.js" and initialized to false. Inside of that module, it is never set to true. What does that flag do? It is explained in the code documentation of the warn() function in logging.js.

/**
 * Logs a warning. It will be logged to standard error and cause the build to fail
 * if buildConfig.shouldWarningsFailBuild is true, otherwise it will be logged to standard output.
 * @param message - the warning description
 * @public
 */
function warn() {

Furthermore, I tried to find the crossing point of "shouldWarningsFailBuild" and the "--ship" flag, to see what the "--ship" flag does to it.

This actually happens in the module "@microsoft/sp-build-common", in the file "BuildRig.js". There, "shouldWarningsFailBuild" is set equal to the "ship" flag, which essentially makes warnings always fail the build when shipping.

// Note this overrides the getters for ship and production on args
 this.args.ship = this.args.production = (this.args.production || this.args.ship);
 // Since gulp-core-build doesn't recognize the --ship flag, ensure it gets the right state
 coreBuild.mergeConfig({
    production: this.args.ship,
    shouldWarningsFailBuild: this.args.ship
 });
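The merge shown above can be mimicked in a few lines of plain JavaScript (a sketch for illustration only, not the actual module code), which makes the side effect obvious: passing --ship (or --production) forces shouldWarningsFailBuild on, so any warning emitted during packaging fails the build.

```javascript
// Minimal mimic of the flag merge in BuildRig.js (illustrative sketch,
// not the real @microsoft/sp-build-common implementation).
function mergeBuildFlags(args) {
  // --production and --ship are treated as aliases of each other
  const ship = Boolean(args.production || args.ship);
  return {
    ship,
    production: ship,
    // This coupling is what turns every warning into a build failure
    shouldWarningsFailBuild: ship,
  };
}
```

With this mirror of the logic, `mergeBuildFlags({ ship: true })` yields `shouldWarningsFailBuild: true`, while a plain `gulp package-solution` (no flags) leaves it false, matching the behavior observed above.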

That in itself is not the problem. But how are you supposed to use extensions that are activated tenant-wide automatically, when packaging them produces a warning and, as soon as you want to ship, the build fails on a build agent?

In my opinion, this is a bug.

So, I will file an issue on GitHub later to have this fixed.
Update: I have created an issue on GitHub.

Waldek gave me the idea: since the build succeeds as soon as the warning disappears, the warning must be forwarded to stderr. This is clearly the case. This bug exists in version 3.05 of the "gulp-core-build" module.

Idea for dynamic technical documentation

The third topic was an idea of mine. SharePoint administrators know SPDocKit very well, which can generate nice documentation out of the infrastructure. SPCAF already checks artifacts in SharePoint solutions, so it already knows the dependencies and which artifacts are actually included.

In our company we use Markdown in the VSTS wiki to document the solution; we like to keep the documentation close to the code. So I asked: what about creating a Markdown output for solution documentation of SharePoint artifacts?

Like:

And much more. This would make documentation much easier, and you could always have fresh documentation, generated with SPCAF at build time and saved to the VSTS wiki, which is a repository anyway.

Not sure how much work this is, but I bet it is much easier for SPCAF than starting from scratch.
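A minimal sketch of the idea in Node: given a list of artifacts, as a tool like SPCAF might discover them, emit a Markdown table that could be pushed to the VSTS wiki at build time. The artifact shape here is invented purely for illustration; SPCAF exposes no such API that I know of.

```javascript
// Sketch: turn a (hypothetical) artifact list into a Markdown table.
// The { name, type, description } shape is an assumption for illustration.
function artifactsToMarkdown(artifacts) {
  const header = '| Artifact | Type | Description |\n| --- | --- | --- |';
  const rows = artifacts.map(
    a => `| ${a.name} | ${a.type} | ${a.description || ''} |`
  );
  return [header, ...rows].join('\n');
}
```

A build step could write the returned string to a .md file and commit it to the wiki repository, so the documentation is regenerated on every build.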

He found this idea (or parts of it) interesting enough that he took my name, and we will soon talk about it a bit more. Let's see whether it makes sense once we go into detail. But from a developer's point of view, it would.

Talk with Erwin van Hunnen (MVP, Rencore)

Anyone who doesn't know him has probably never worked closely with PnP PowerShell. Vesa always calls him "the father of PnP PowerShell". He was and is the main person driving this project and probably (I am not sure) also came up with the idea. He is of course not the only contributor, but he is more or less leading the project.

The reason I approached him is that I want to start contributing to the PnP projects. Currently, at Schindler, I mostly take care of VSTS and of developing and improving the tools for our needs. Since we use PnP provisioning with XML and PowerShell a lot, I developed a VSTS task for it.

This task makes it much easier to deploy artifacts without having to use any PnP PowerShell directly, because the task takes care of it.

Anyway, I talked to him about contributing. And of course I can, and I will. There are several tasks that could be created to improve the ALM story for SharePoint deployments. There are no provisioning tasks or app deployment tasks in the marketplace yet. I want to change that, so I will start contributing as soon as possible.

Talk with Andreas Krüger (Comparex)

The last talk I had was about PowerShell DSC and how it can be used with ALM. Since our development at Schindler lives entirely on Azure, we are very flexible. We do not use Dev Test Labs yet, but a colleague and I are trying to push in that direction. What it would mean is moving the existing machines into Azure Dev Test Labs and also having a customized SharePoint template to be used when developers create their SharePoint dev machines.

Migration of Dev Machines into Dev Test Labs

The migration is the harder part, but my main idea was to use SharePoint Reverse DSC for it. You could make a content DB backup and a configuration backup of the current machines and then try to move them. I am not sure if this is the most feasible approach.

But the DSC configuration could also be used as a starting point for the ARM templates used in the Dev Test Labs. With ARM templates it is possible to use PowerShell DSC modules, and this is the way to go, especially for operations people.



Sebastian is an Azure nerd with a focus on DevOps and Azure DevOps (formerly VSTS) who converted from the big world of SharePoint and O365. He has been working with O365 since 2013 and has loved it ever since. When his focus shifted in 2017 to more DevOps-related topics in the Microsoft stack, he learned to love the possibilities of automation. Besides writing articles for his blog and German magazines, he still contributes to the SharePoint developer community (and SharePoint PnP) to help make the ALM part a smoother place to live in.
