A simple blog talking about many things using as few words as possible. Maybe it could help if you are in a hurry. You can find more about me on my website mlbors.com. Keep it stupidly simple!
What are JavaScript Proxies?
In this post, we are going to see what JavaScript Proxies are, how they work and how we can use them.
Introduction
Proxies were introduced in ES6 and let us define custom behaviour for fundamental operations such as property lookup, assignment or function invocation. In other words, a Proxy is an object that stands between an object and what we could call the outside world. It means that we can wrap an existing object and intercept any access to its attributes or its methods.
How does it work?
Three components are important when we talk about Proxies:
Target: the object that will be wrapped (it can be any sort)
Traps: the methods that intercept operations performed on the Target, such as property access
Handler: the placeholder object that contains the Traps
As we said before, we can use a Proxy to define a custom behaviour whenever the properties or the methods of an object, the Target, are accessed. It allows us to provide custom functionality for a basic operation performed on an object. We achieve this by using Traps. The Handler object passes the Target and the requested element to the relevant Trap. A complete list of the various Traps can be found here.
A simple example
Let's start with a simple example. First, we define an object
const mario = { name: 'mario', profession: 'plumber' }
Now, let's do the following things:
console.log(mario.name) // output: mario
console.log(mario.profession) // output: plumber
console.log(mario.power) // output: undefined
As we can see, our object has only two properties and we try to access a third one that doesn't exist. Sadly, we receive "undefined" in that case. How can we return a default value instead when we try to access a property that doesn't exist? We can do this by using the get Trap. Let's define a Handler and a Proxy:
const handler = {
  get: function(target, name) {
    return name in target ? target[name] : 'none'
  }
}

const p = new Proxy(mario, handler)
Let's try to access the properties of our object through the Proxy:
console.log(p.name) // output: mario
console.log(p.profession) // output: plumber
console.log(p.power) // output: none
When to use a Proxy?
We can imagine using a Proxy when we want to enforce value validation in a JavaScript object. We can, for example, simply check if the value we want to set is correct or if the affected property can be modified.
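For instance, validation can be done with the set Trap. The sketch below is purely illustrative; the object and the age rule are made up for the example:

```javascript
const character = { name: 'luigi', age: 30 }

const validator = {
  set: function(target, property, value) {
    // Reject invalid ages before they reach the Target object
    if (property === 'age' && (!Number.isInteger(value) || value < 0)) {
      throw new TypeError('age must be a non-negative integer')
    }
    target[property] = value
    // Returning true indicates that the assignment succeeded
    return true
  }
}

const validatedCharacter = new Proxy(character, validator)
validatedCharacter.age = 35 // works fine
// validatedCharacter.age = -1 would throw a TypeError
```

Every assignment now goes through the set Trap, so invalid data never reaches the underlying object.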
We can also use a Proxy to revoke the access to an object or simplify a data structure querying process.
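Revoking access relies on Proxy.revocable, which returns both a Proxy and a revoke function; once revoke is called, any operation on the Proxy throws a TypeError. A minimal sketch:

```javascript
const data = { secret: 42 }

// Proxy.revocable returns the proxy and a function that disables it
const { proxy, revoke } = Proxy.revocable(data, {})

console.log(proxy.secret) // output: 42

revoke()
// From now on, any access through the proxy throws a TypeError
```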
Conclusion
Through this brief article, we saw what JavaScript Proxies are. We saw that a Proxy is an object that is used to define custom behaviour for fundamental operations. There are three important terms when we talk about Proxies: Target, Traps and Handler.
One last word
If you like this article, you can consider supporting and helping me on Patreon! It would be awesome! Otherwise, you can find my other posts on Medium and Tumblr. You will also know more about myself on my personal website. Until next time, happy headache!
Some thoughts about SharePoint and Unit Testing
Through this post, we are going to reflect on SharePoint and Unit Testing.
Introduction
Let's be honest: here, we are not going to picture the perfect solution to write good unit tests easily when we use SharePoint as a development platform. Writing unit tests when we develop something for SharePoint can be really hard and discouraging. However, we are going to overview a few options to achieve this and to have a better conscience (or not).
Whatever development model we choose, we will face some problems and scratch our heads. Some even say that SharePoint was not designed with testability in mind. However, let's see the various ways we can explore.
Option 1: avoid Unit Testing
This option is pretty radical and simple. It depends on whether we can live with it or not.
Option 2: use third-party tools dedicated to SharePoint
There are several third-party tools dedicated to SharePoint that can help us achieve Unit Testing against it. However, the good ones require our credit card and don't guarantee that everything will go smoothly.
Option 3: wrap SharePoint objects
When we develop using the .NET Framework and want to write our various tests, it is really common to use tools such as Moq to create fake objects to easily isolate what we want to test. Now, with SharePoint, our main problem is the code that depends on SharePoint. Using Moq to mock SharePoint will most of the time lead us to a dead end. SharePoint classes are often sealed and some objects cannot be instantiated without an HTTP Context. Maybe we will succeed in mocking a few things, but the result won't be satisfying and will probably be messy.
One workaround to that problem is to wrap the values of the various SharePoint objects we need in classes or structs that we control and can easily mock. This requires creating extra classes, but, if we use these various wrappers, it can ease Unit Testing.
Option 4: create another layer
This option makes use of "Option 3" and leads us to create another layer between our code and SharePoint. It means that instead of using the various SharePoint APIs directly, we create one or more objects (Services, Proxies or whatever we want to call them) that we will work with. These objects will then work with the various SharePoint APIs, directly or through Repositories, depending on how we want to implement this concept, and return wrapped objects.
With this solution, we can concentrate the usage of SharePoint objects in a restricted area and decouple our code from the SharePoint APIs. So, in things like Event Receivers or the code-behind of Control Templates, instead of using SharePoint classes we use our different Services. It makes our code more testable and avoids code duplication.
However, most of the methods exposed by SharePoint objects don't have a return value. So, if we have a Service that communicates with a Repository, how can we know whether SharePoint failed or succeeded in achieving the requested operation? Well, sadly, we have to find workarounds. For example, when we add an SPItem to an SPList, we can count the number of items before and after the operation, check if the item exists in the updated collection and return a boolean value depending on the scenario. This leads to extra code and extends the duration of the operation, but we will have an answer.
Of course, this option could lead to over-engineering problems and there will always be a point where we will face SharePoint objects. We also have to take extra care with the handling of the SPContext, SPSite and SPWeb objects because doing it without caution could raise severe exceptions.
Option 5: create a console application
This is not really Unit Testing. However, we can imagine creating a small console application using C# or PowerShell that will check, after we deployed our package, if our various Features were installed and activated or if our Lists are in the right place. It involves the whole SharePoint installation and "real" data.
Conclusion
Through this article, we explored some options we have when we want to unit test against SharePoint. We can see that Unit Testing, in such a case, is not really easy and can bring us pain and suffering, maybe more than with another platform or framework. However, we have a few possibilities that can make us more confident in our development. It is up to us to decide which solution is best depending on what we want to achieve, to accept that some things can only be tested by hand, and to remember that the good old "try...catch" thing is here for us.
What are Sets?
In this small article, we are going to take a look at Sets to understand what they are and how they work.
Introduction
In computer science, a Set is a data structure. It can store unique values of the same type without any particular order. However, the stored objects have to be comparable.
A Set is an implementation of the mathematical concept of a finite set.
Here, we are going to build our own Set class using C# and .NET Framework to understand how it works. First, we are going to see the basic set up for our class. Then, we are going to examine the specific operations Union, Intersection, Difference and Symmetric Difference. Nevertheless, this Set class is not going to be a production quality data structure.
Set Class
Let's imagine the following code:
using System;
using System.Collections.Generic;

public class Set<T> : IEnumerable<T> where T : IComparable<T>
{
    private readonly List<T> _items = new List<T>();

    public Set() { }

    public Set(IEnumerable<T> items)
    {
        AddRange(items);
    }

    public void Add(T item)
    {
        // Only add the value if it is not already in the Set
        if (Contains(item))
        {
            return;
        }

        _items.Add(item);
    }

    public void AddRange(IEnumerable<T> items)
    {
        foreach (T item in items)
        {
            Add(item);
        }
    }

    public bool Remove(T item)
    {
        return _items.Remove(item);
    }

    public bool Contains(T item)
    {
        return _items.Contains(item);
    }

    public int Count
    {
        get { return _items.Count; }
    }

    public Set<T> Union(Set<T> otherSet)
    {
        Set<T> result = new Set<T>(_items);
        result.AddRange(otherSet._items);
        return result;
    }

    public Set<T> Intersection(Set<T> otherSet)
    {
        Set<T> result = new Set<T>();

        foreach (T item in _items)
        {
            if (otherSet._items.Contains(item))
            {
                result.Add(item);
            }
        }

        return result;
    }

    public Set<T> Difference(Set<T> otherSet)
    {
        Set<T> result = new Set<T>(_items);

        foreach (T item in otherSet._items)
        {
            result.Remove(item);
        }

        return result;
    }

    public Set<T> SymmetricDifference(Set<T> otherSet)
    {
        Set<T> intersection = Intersection(otherSet);
        Set<T> union = Union(otherSet);
        return union.Difference(intersection);
    }

    public IEnumerator<T> GetEnumerator()
    {
        return _items.GetEnumerator();
    }

    System.Collections.IEnumerator System.Collections.IEnumerable.GetEnumerator()
    {
        return _items.GetEnumerator();
    }
}
As we can see, we implemented our own Set class. Let's examine it in detail.
Constraints, members and constructors
Our class is a generic class. We also decided that it has to implement the IEnumerable interface. Finally, we added a constraint specifying that "T" has to be a comparable type.
We chose to use the "List" data structure to store our values, but if we wanted, we could have done it with an array.
We declared two constructors, one without any arguments, and another one that lets us initialize our Set with a collection of values.
Basic operations
Let's see the basic operations we need to use our Set.
First, the Add method simply checks if the value we want to add to our Set is already in our list of items. If it is not, it adds this value to the list.
Remove and Contains do what they are supposed to do. Here, we just use the methods provided by the "List" class. We do the same for the Count property.
Union
Union compares two Sets and returns a third Set that contains all of the unique elements in both Sets. So, for example the Union of {1,3,4,9} and {3,6,7} is {1,3,4,6,7,9}.
Intersection
Intersection compares two Sets and returns a third Set that contains all the values shared by both Sets. The Intersection of {1,2,7} and {2,4,7,9} is {2,7}.
Difference
Difference compares two Sets and returns a third Set that contains the values that are only in the first Set. So, the Difference of {1,3,5,6} and {2,3,6,7} is {1,5}.
Symmetric Difference
Symmetric Difference is the Difference of the Union and the Intersection. In other words, it compares two Sets and returns a third Set that contains values that are only in one Set. So the Symmetric Difference of {1,2,4,6,7} and {1,3,4,5,9} is {2,3,5,6,7,9}.
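Although the post's implementation is in C#, the same four operations can be sketched quickly with JavaScript's native Set; the helper functions below are illustrative and not part of the class above:

```javascript
// Illustrative one-line versions of the operations described above,
// built on JavaScript's built-in Set
const union = (a, b) => new Set([...a, ...b])
const intersection = (a, b) => new Set([...a].filter(x => b.has(x)))
const difference = (a, b) => new Set([...a].filter(x => !b.has(x)))
const symmetricDifference = (a, b) => difference(union(a, b), intersection(a, b))

const s1 = new Set([1, 2, 4, 6, 7])
const s2 = new Set([1, 3, 4, 5, 9])

console.log([...symmetricDifference(s1, s2)].sort((x, y) => x - y))
// output: [ 2, 3, 5, 6, 7, 9 ]
```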
Conclusion
Through this small post, we saw what Sets are and how they work. We saw how we can implement our own Set class and which concepts lie behind this data structure.
Initializing a project with Parcel
Through this small post, we are going to see how to initialize a project with Parcel.
Introduction
There are many tools that help us to build our application or our website. Some are really complex and sometimes tedious to set up just for a small project. This is where we can use Parcel.
Parcel is a bundler that requires almost zero configuration to achieve our goal. Our example is going to be pretty straightforward.
Installing
Before going any further, we need to install Node.js and npm. Then, we can install Parcel globally like so:
npm install -g parcel-bundler
Now, let's initialize our project:
npm init
We can now install a few dependencies:
npm install --save bootstrap jquery popper.js
And now, a few dev dependencies:
npm install --save-dev sass
Adding a few files
Let's create the following files:
src/
  assets/
    img/
      foo-img.jpg
    scripts/
      main.js
    styles/
      main.scss
  index.html
We can now fill our different files like so:
require('bootstrap');
require('../img/*.*');
main.js
@import "../../../node_modules/bootstrap/scss/bootstrap.scss";
main.scss
<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
    <link rel="stylesheet" href="assets/styles/main.scss">
  </head>
  <body>
    <div class="container-fluid">
      <div class="row">
        <div class="col">
          <h1>Using Parcel</h1>
          <p>Foo content!</p>
        </div>
      </div>
    </div>
    <script src="assets/scripts/main.js"></script>
  </body>
</html>
index.html
Package.json
In our package.json file, we can now add the following lines:
"main": "src/index.html",
"scripts": {
  "serve": "parcel src/index.html --out-dir dist",
  "watch": "parcel watch src/index.html --out-dir dist",
  "build": "parcel build src/index.html --out-dir dist"
}
We can now run the following command:
npm run serve
If everything is alright, our website will be available at http://localhost:1234.
Conclusion
Through this small article, we saw how we can use Parcel to quickly set up a simple project. As we saw, there is almost no configuration needed to achieve our goal.
SharePoint Development Models
Through this article, we are going to have an overview of the different SharePoint development models. Let's get into it!
Introduction
Before building an application, we have a large number of things to think about: needs, goals, architecture, infrastructure, frameworks and so on. Developing for SharePoint adds an additional layer of complexity because we have to choose between various ways to work.
Each SharePoint development model has its purposes, advantages and difficulties. Here, we are going to have an overview of those different models.
Farm Solutions
Also known as Full Trust Solutions, they have to be developed on a SharePoint server and have access to the full server-side SharePoint API. They are supported in SharePoint on-premise installations, have to be deployed by a Farm Administrator, and the various features they contain are then available to the entire farm.
Farm Solutions are distributed as wsp packages and can have different scopes: Farm, Web Application, Site Collection or Website. They support things like Features, Event Receivers, Timer Jobs, WebParts, Modules and so on.
When we deploy a Farm Solution, we have to keep in mind that an IIS Reset will be performed.
Sandbox Solutions
Because Farm Solutions are very permissive, Microsoft introduced another kind of solution: Sandbox Solutions. Their scope is smaller because they can only target the Site Collection and have access to a small subset of the server-side API.
The wsp packages can be deployed to a Solutions Gallery and they don't force an IIS reset, and while a Farm Solution can bring down the whole farm, a Sandbox Solution only has an impact on a Site Collection. They are very useful for deploying assets, Content Types and Lists.
Sandbox Solutions are now deprecated, but they can still be used.
Add-Ins
Also known as Apps, SharePoint Add-Ins are deployed in the App Catalog, in the form of an .app file, public or private, and provide a way to develop an application without any server-side code executing on the SharePoint server. This means that Add-Ins run either in the context of the client browser or on another server.
This model, introduced with the Office Store and Cloud-related concerns in mind, provides a high level of isolation. Add-Ins require working with the Client Side Object Model (CSOM) or the REST API.
Microsoft claims that a Farm Solution can be converted into one or several Add-Ins. However, recreating with an Add-In things that could easily be done with a Farm Solution can be tricky.
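To give an idea of the REST side, reading a list usually means issuing a GET request against a URL like the one built below. The "/_api/web/lists" route is SharePoint's REST entry point, while the site URL and list title here are made up for the example:

```javascript
// Hypothetical helper building the REST endpoint for a list's items.
// Only the /_api/web/lists/getbytitle('...')/items route is SharePoint's;
// the site URL and list title are illustrative.
function listItemsEndpoint(siteUrl, listTitle) {
  return siteUrl.replace(/\/$/, '') +
    "/_api/web/lists/getbytitle('" + encodeURIComponent(listTitle) + "')/items"
}

console.log(listItemsEndpoint('https://contoso.sharepoint.com/sites/dev/', 'Tasks'))
// output: https://contoso.sharepoint.com/sites/dev/_api/web/lists/getbytitle('Tasks')/items
```

An Add-In would then send a GET request to such a URL with an "Accept: application/json; odata=verbose" header and an appropriate access token.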
SharePoint Add-Ins come in various flavors:
SharePoint-Hosted
Provider-Hosted
SharePoint-Hosted Add-Ins are installed on a SharePoint Website, called the Host Web while their resources are hosted on an isolated subsite called the App Web. They only support JavaScript, a few ASPX files and XML. SharePoint-Hosted Add-Ins can access data and resources that are outside of the App Web by using one of the following techniques to bypass the browser's same origin policy: a special JavaScript cross-domain library or a specific JavaScript WebProxy class.
Provider-Hosted Add-Ins include components that are deployed on another server while they are installed on the Host Web. It means that we are able to run server-side code on another server and to communicate with SharePoint using CSOM. They offer a great flexibility to develop the various elements we need.
If we can create WebParts with Farm Solutions, Add-Ins offer something similar called Add-In Parts, or Client WebParts. This concept is similar to a WebPart, but the Add-In Part displays a webpage that we specify by using an IFrame in a page of the Host Web.
SharePoint Add-Ins are security principals that need to be authenticated and authorized and this can be done in various ways. An Add-In uses permission requests to ask for the permissions it needs. The permission requests specify the rights that the Add-In needs and the scope at which it needs the rights.
SharePoint Framework
Also known as SPFx, the SharePoint Framework is the most recent addition to the SharePoint developer toolbox. It provides full support for client-side development and grows with the development of SharePoint Online. It allows us to develop components using modern web technologies such as React. For now, the support of this framework is more advanced in SharePoint Online and it is only possible to develop WebParts and Extensions.
One advantage of this framework is that we don't need SharePoint to be installed on our machine to develop. We just have to download a few Node packages and to run our server using Gulp. When we compile what we developed, we also get an .app file.
Conclusion
Through this article, we saw the various existing ways to develop for and with SharePoint. We saw the main idea behind each model, what they have in common and how they differ. We saw that Solutions use server-side API and Add-Ins aim to execute in a client context. We also had a small overview of the SharePoint Framework.
Deploying WebParts built with SPFx on SharePoint on-premise
Through this article, we are going to see how we can build a WebPart using the SharePoint Framework (SPFx) and deploy it on a SharePoint 2016 on-premise installation.
Introduction
The SharePoint Framework, aka SPFx, is a page and extension model that allows us to develop front-end apps using client-side code. Here, we are going to see how we can quickly build a WebPart and deploy it on a SharePoint 2016 on-premise environment.
Prerequisites
Before going any further, we have to check that the "September 2017 Public Update for SharePoint 2016" is installed on our server, otherwise we can't use SPFx. The "Configuration database" version must be equal to or greater than 16.0.4588.1000. We also need to have our own custom App Catalog.
We also need to have Node and npm installed on our machine. Depending on the Node version that we use, we may encounter an error like "ERR_SSL_PROTOCOL_ERROR". In such a case, setting the environment variable "NODE_NO_HTTP2" to "1" or disabling the "https" parameter in the "serve.json" file that we will get later, can fix the problem.
Setting up our project
First we need to install Gulp and Yeoman globally like so:
npm install -g yo gulp
We are then going to initialize our project like so:
npm init
npm install @microsoft/generator-sharepoint --save-dev
Here, we chose to install the "Yeoman SharePoint generator" locally and not globally because it offers us the ability to switch between different projects using different versions of the "Yeoman SharePoint generator". We can now run the generator like so:
yo @microsoft/sharepoint
A few questions will be prompted. We have to specify that we want to use the SharePoint 2016 version and we want to create a WebPart. For the sake of our example, we also need to select the "No JavaScript framework" option.
When the installation is done, we have to install the Developer certificate like so:
gulp trust-dev-cert
Now, if we run the following command, a series of Gulp tasks will be executed and our browser will launch, so we can preview our WebPart in our local dev environment:
gulp serve
Deploying
Our WebPart is just a simple "HelloWorld" WebPart, but it is enough for the exercise. Here, we are not going to see how to use SharePoint REST APIs or how we can improve our development.
So, it is now time to deploy our WebPart. First, let's head to the "config/write-manifests.json" file and let's edit it like so:
{
  "$schema": "https://dev.office.com/json-schemas/spfx-build/write-manifests.schema.json",
  "cdnBasePath": "https://tenant-name.com/accessible/folder"
}
Here, we specify where we host our files. It could be SharePoint, Azure or whatever we want.
We can now generate the files that we want to deploy:
gulp bundle --ship
If everything is alright, we now have two folders: "dist", where unminified bundles are, and "temp/deploy", where optimized bundles are placed.
Now we can generate the "sppkg" package like so:
gulp package-solution --ship
To finish the process, we have to upload the files placed in "temp/deploy" to the place we specified in our "config/write-manifests.json" file. Then, we can upload our package to the App Catalog. When it is done, we can simply add our app to a site like any other SharePoint app.
Conclusion
Through this post, we saw how we can quickly start a project using SPFx and how we can deploy our app when we have an on-premise environment. However, we did not see the development part and of course, we could go further by setting up a process where our application is built, packaged and deployed automatically (but this is another subject).
Host-Named Site Collections, Managed Paths and App Catalog with SharePoint 2016
In this article, we are going to set up a Host-Named Site Collection and Managed Paths, and create a private App Catalog using SharePoint 2016 on-premise.
Introduction
Host-Named Site Collections enable us to assign a unique DNS name to Site Collections. So, it means that we can deploy many sites with unique DNS names in the same Web Application and it allows us to scale our environment to many customers. In other words, we can have something like http://sitea.domain.com and http://siteb.domain.com.
For the sake of our example, let's imagine that we need to set a development environment with multiple Collections, each for a different purpose, and a private App Catalog for our Add-Ins.
Here, we are not going to see how to configure the domain names in DNS and everything that is related to this part. So, of course, we have to check with the system administrator what could be done.
Creating the HNSC
Because Host-Named Site Collections can only be created with PowerShell and not from the Central Administration, we are going to build a PowerShell script that will create what we need: an Application Pool, a Web Application, a Root Site Collection and our Host-Named Site Collection.
First, we need the Application Pool:
New-SPServiceApplicationPool -Name $applicationPool -Account $managedAccount
We can now set our Web Application and create the required binding for IIS:
New-SPWebApplication -Name $webAppName -HostHeader $webAppHostHeader -Port $port -Url $webAppUrl -ApplicationPool $applicationPool -ApplicationPoolAccount (Get-SPManagedAccount $managedAccount) -AuthenticationProvider (New-SPAuthenticationProvider -UseWindowsIntegratedAuthentication) -DatabaseName $dataBase
New-WebBinding -Name $webAppName -IPAddress "*" -Port $port -Protocol http
Then, we can create the Root Site Collection:
New-SPSite -Url $rootCollectionUrl -HostHeaderWebApplication (Get-SPWebApplication $webAppName) -Name $rootCollectionName -Description $rootCollectionDescription -OwnerAlias $ownerAlias
Now, let's set up our Host-Named Site Collection:
New-SPSite -Url $collectionUrl -HostHeaderWebApplication (Get-SPWebApplication $webAppName) -Name $collectionName -Description $collectionDescription -OwnerAlias $ownerAlias -language $language -Template $template
Finally, we can create a Collection using a Managed Path:
New-SPManagedPath -RelativeURL $managedPath -HostHeader -Explicit
$url = $collectionUrl + "/" + $managedPath
New-SPSite -Url $url -HostHeaderWebApplication $collectionUrl -Name $managedPathCollectionName -Description $managedPathCollectionDescription -OwnerAlias $ownerAlias -Language $language -Template $template
Our PowerShell script could look something like this:
#******************#
#***** PARAMS *****#
#******************#

Param(
    [string] $webAppName,
    [string] $webAppUrl,
    [string] $webAppHostHeader,
    [string] $applicationPool,
    [int] $port = 80,
    [string] $managedAccount,
    [string] $dataBase,
    [string] $rootCollectionUrl,
    [string] $rootCollectionName,
    [string] $rootCollectionDescription,
    [string] $collectionUrl,
    [string] $collectionName,
    [string] $collectionDescription,
    [string] $ownerAlias,
    [int] $language = 1033,
    [string] $template = "STS#0",
    [bool] $createAppPool = $true,
    [bool] $createWebApp = $true,
    [bool] $createRootCollection = $true,
    [bool] $createHostNameCollection = $true,
    [bool] $createManagedPathCollection = $true,
    [string] $managedPath,
    [string] $managedPathCollectionName,
    [string] $managedPathCollectionDescription
)

#********************#
#***** INCLUDES *****#
#********************#

Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

#***********************************#
#***** CREATE APPLICATION POOL *****#
#***********************************#

function CreateApplicationPool()
{
    Write-Host "...creating application pool"
    New-SPServiceApplicationPool -Name $applicationPool -Account $managedAccount
    Write-Host "Application Pool created." -ForegroundColor Green
    Write-Host ""
}

#**********************************#
#***** CREATE WEB APPLICATION *****#
#**********************************#

function CreateWebApplication()
{
    Write-Host "...creating web application"
    New-SPWebApplication -Name $webAppName -HostHeader $webAppHostHeader -Port $port -Url $webAppUrl -ApplicationPool $applicationPool -ApplicationPoolAccount (Get-SPManagedAccount $managedAccount) -AuthenticationProvider (New-SPAuthenticationProvider -UseWindowsIntegratedAuthentication) -DatabaseName $dataBase
    New-WebBinding -Name $webAppName -IPAddress "*" -Port $port -Protocol http
    Write-Host "Web Application created." -ForegroundColor Green
    Write-Host ""
}

#**********************************#
#***** CREATE ROOT COLLECTION *****#
#**********************************#

function CreateRootCollection()
{
    Write-Host "...creating root collection"
    New-SPSite -Url $rootCollectionUrl -HostHeaderWebApplication (Get-SPWebApplication $webAppName) -Name $rootCollectionName -Description $rootCollectionDescription -OwnerAlias $ownerAlias
    Write-Host "Root Collection created." -ForegroundColor Green
    Write-Host ""
}

#****************************************#
#***** CREATE HOST-NAMED COLLECTION *****#
#****************************************#

function CreateHostNamedCollection()
{
    Write-Host "...creating host-named collection"
    New-SPSite -Url $collectionUrl -HostHeaderWebApplication (Get-SPWebApplication $webAppName) -Name $collectionName -Description $collectionDescription -OwnerAlias $ownerAlias -Language $language -Template $template
    Write-Host "Host-Named Collection created." -ForegroundColor Green
    Write-Host ""
}

#******************************************#
#***** CREATE MANAGED PATH COLLECTION *****#
#******************************************#

function CreateManagedPathCollection()
{
    Write-Host "...creating managed path collection"
    New-SPManagedPath -RelativeURL $managedPath -HostHeader -Explicit
    Write-Host "Managed Path added." -ForegroundColor Green
    Write-Host ""
    $url = $collectionUrl + "/" + $managedPath
    New-SPSite -Url $url -HostHeaderWebApplication $collectionUrl -Name $managedPathCollectionName -Description $managedPathCollectionDescription -OwnerAlias $ownerAlias -Language $language -Template $template
    Write-Host "Managed Path Collection created." -ForegroundColor Green
    Write-Host ""
}

#****************#
#***** MAIN *****#
#****************#

function Main()
{
    Write-Host "****************************************"
    Write-Host "***** CREATE HOST NAMED COLLECTION *****"
    Write-Host "****************************************"
    Write-Host " "
    Write-Host "***** START *****" -ForegroundColor Green
    Write-Host " "

    if ($createAppPool) { CreateApplicationPool }
    if ($createWebApp) { CreateWebApplication }
    if ($createRootCollection) { CreateRootCollection }
    if ($createHostNameCollection) { CreateHostNamedCollection }
    if ($createManagedPathCollection) { CreateManagedPathCollection }

    Write-Host " "
    Write-Host "***** END *****" -ForegroundColor Green
    Write-Host " "

    Read-Host -Prompt "Press ENTER to continue"
    exit
}

#******************#
#***** SCRIPT *****#
#******************#

Main
We can now use our script like so:
.\create-host-named-collection.ps1 -webAppName "sp16dev1.com" -webAppUrl "http://sp16dev1.com" -webAppHostHeader "sp16dev1.com" -applicationPool "SharePoint - sp16dev1.com" -port 80 -managedAccount Domain\serviceAccount -dataBase WSS_Content_DevSP -rootCollectionUrl "http://rootsp16dev.com" -rootCollectionName "rootsp16dev" -rootCollectionDescription "Root collection" -collectionUrl "http://sp16dev.com" -collectionName "sp16dev" -collectionDescription "Main collection for development" -ownerAlias [email protected] -language 1033 -managedPath "addins-dev" -managedPathCollectionName "sp16dev-addins" -managedPathCollectionDescription "Collection for add-ins"
We could probably reduce the number of arguments and add some safety checks to our script, but for now it will do. If everything is alright, we should see our Web Application and our Collections in the Central Administration.
Creating the App Catalog
We are now going to set up our App Catalog. First, we need to go to the Central Administration, then "System Settings", "Manage services in this farm". There, we have to click on "Enable Auto Provision" for "Microsoft SharePoint Foundation Subscription Settings Service".
Next, we have to create the "Subscription Settings" service application and proxy:
$SubscriptionSvcApp = New-SPSubscriptionSettingsServiceApplication -ApplicationPool 'SharePoint Web Services Default' -Name 'Subscriptions Settings Service Application' -DatabaseName 'Subscription'
$SubscriptionSvcProxy = New-SPSubscriptionSettingsServiceApplicationProxy -ServiceApplication $SubscriptionSvcApp
We may also need to create an "App Management Service Application". This can be done under "Manage Service Applications": we have to click on "New", then "App Management Service". We can choose "SharePoint Web Services Default" as the "Application Pool".
We now have to head to the "Apps" page and click on the "Configure App URLs" link. In the "App domain" field, we have to enter the domain that we chose to host our apps, and in the "App prefix" field, we need to specify the prefix we want to use. At the end of the day, the URL for an app should look something like "app-12345678ABCDEF.apps.sp16dev.com".
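To make the URL scheme concrete, here is a small sketch (in Python, with a hypothetical `build_app_url` helper) showing how the app prefix, a per-instance app id, and the app domain combine into an address like the one above:

```python
# Illustrates how SharePoint composes add-in URLs from the app prefix,
# a per-instance app id, and the configured app domain.
# Prefix, domain, and example id are the values used in this post.
def build_app_url(prefix, app_id, app_domain):
    return "{0}-{1}.{2}".format(prefix, app_id, app_domain)

url = build_app_url("app", "12345678ABCDEF", "apps.sp16dev.com")
print(url)  # app-12345678ABCDEF.apps.sp16dev.com
```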
Using PowerShell, we now have to configure our app URLs for our tenant:
Set-SPAppDomain apps.sp16dev.com
Set-SPAppSiteSubscriptionName -Name "app" -Confirm:$false
Now, let's head back to the Central Administration, then "Application Management", "Manage Web applications" and select our web application. On the ribbon, let's click on "Manage Features" and activate the "Apps that require accessible internet facing endpoints" feature.
We can now also enable sideloading on our dev site if needed:
Enable-SPFeature -identity "EnableAppSideLoading" -URL http://sp16dev.com/addins-dev
Finally, we can create our app catalog:
New-SPSite -Url "http://sp16dev.com/apps" -HostHeaderWebApplication "http://sp16dev.com" -Name "apps" -Description "App Catalog" -OwnerAlias [email protected] -language 1033 -Template "APPCATALOG#0"
If everything is alright, we can navigate to "http://sp16dev.com/apps" where we can upload our well-crafted Add-Ins and make them accessible in our different sites through the App Catalog.
Side note
We may have trouble with our catalog: it may tell us that there is nothing from our organization even though apps have been deployed. This post from Microsoft could be the answer to this problem.
Conclusion
Through this post, we saw what Host-Named Site Collections are and how we can set up our environment by using them. We also took a look at how we can create a private App Catalog.
One last word
If you like this article, you can consider supporting and helping me on Patreon! It would be awesome! Otherwise, you can find my other posts on Medium and Tumblr. You will also know more about myself on my personal website. Until next time, happy headache!
A small introduction to Unity Container
Through this post, we are going to take a look at how we can achieve Dependency Injection with Unity (the IoC Container, not the Game Engine). Let's get into it!
Introduction
Dependency Injection is one way to implement Inversion of Control. Dependency Injection, also called DI, is a design pattern in which one or more dependencies (services) are injected into a dependent object (client). This pattern allows us to implement a loosely coupled architecture by separating the creation of a client's dependencies from its own behavior. We can use this pattern when we want to remove knowledge of concrete implementations from objects, but also when we want to get a better testable code in isolation using mock objects.
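As an illustration of the pattern itself, independent of Unity (a Python sketch with hypothetical `ConsoleLogger` and `OrderService` names), constructor injection simply means the client receives its service from the outside:

```python
# Minimal constructor injection: the client receives its dependency
# (the service) from outside instead of constructing it itself.
class ConsoleLogger:
    def log(self, message):
        return "LOG: " + message

class OrderService:                 # the "client"
    def __init__(self, logger):    # the dependency is injected here
        self._logger = logger

    def place_order(self, item):
        return self._logger.log("order placed: " + item)

# Wiring happens at composition time, not inside OrderService;
# a test could pass a fake logger instead.
service = OrderService(ConsoleLogger())
print(service.place_order("book"))  # LOG: order placed: book
```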
Here, we are going to see how we can achieve Dependency Injection using Unity, the IoC Container, not the Game Engine. By the way, one way to apply Dependency Injection in our code when we work with Unity3D (here, the Game Engine) is to use Zenject.
Installing Unity
First, we need to install Unity. This can be done through the NuGet Package Manager.
PM> Install-Package Unity
Creating a few interfaces
First, let's define a few interfaces:
public interface IGame
{
    void Play();
}

public interface IInitializer
{
}
We can now create some concrete implementations for these two interfaces:
public class PlatformGame : IGame
{
    public void Play()
    {
        Console.WriteLine("Playing a platform game");
    }
}

public class RPGGame : IGame
{
    public void Play()
    {
        Console.WriteLine("Playing an RPG game");
    }
}

public class Initializer : IInitializer
{
    public Initializer(IGame game)
    {
        game.Play();
    }
}
Using Unity
Now, let's wrap Unity with another class. The classes that will use DI will then do so through what we can call a Facade. Our wrapper could look like so:
public class DIWrapper
{
    /**********************/
    /***** ATTRIBUTES *****/
    /**********************/

    /// <param name="_container">Unity Container</param>
    protected static IUnityContainer _container;

    /***********************************/
    /***** CONTAINER GETTER/SETTER *****/
    /***********************************/

    public static IUnityContainer Container
    {
        get { return _container; }
        set { _container = value; }
    }

    /*********************/
    /***** CONSTRUCT *****/
    /*********************/

    static DIWrapper()
    {
        if (_container == null)
        {
            Container = new UnityContainer();
        }
    }

    /*******************/
    /***** RESOLVE *****/
    /*******************/

    /// <typeparam name="T">Type of object to return</typeparam>
    /// <summary>
    /// Resolves the type parameter T to an instance of the appropriate type.
    /// </summary>
    public static T Resolve<T>()
    {
        T result = default(T);

        if (Container.IsRegistered(typeof(T)))
        {
            result = Container.Resolve<T>();
        }

        return result;
    }

    /// <typeparam name="T">Type of object to return</typeparam>
    /// <param name="name">Name of the registration to resolve</param>
    /// <summary>
    /// Resolves the type parameter T to an instance of the appropriate type.
    /// </summary>
    public static T Resolve<T>(string name)
    {
        T result = default(T);

        if (Container.IsRegistered<T>(name))
        {
            return Container.Resolve<T>(name);
        }

        return result;
    }

    /********************/
    /***** REGISTER *****/
    /********************/

    /// <typeparam name="T">Type of object to register</typeparam>
    /// <summary>
    /// Registers the type parameter T.
    /// </summary>
    public static void Register<T>()
    {
        if (!Container.IsRegistered<T>())
        {
            Container.RegisterType<T>();
        }
    }

    /// <typeparam name="TFrom">Type of object to map from</typeparam>
    /// <summary>
    /// Registers the mapping from TFrom to TTo.
    /// </summary>
    public static void Register<TFrom, TTo>() where TTo : TFrom
    {
        if (!Container.IsRegistered<TFrom>())
        {
            Container.RegisterType<TFrom, TTo>();
        }
    }

    /// <typeparam name="TFrom">Type of object to map from</typeparam>
    /// <param name="name">Name of the registration</param>
    /// <summary>
    /// Registers the named mapping from TFrom to TTo.
    /// </summary>
    public static void Register<TFrom, TTo>(string name) where TTo : TFrom
    {
        if (!Container.IsRegistered<TFrom>(name))
        {
            Container.RegisterType<TFrom, TTo>(name);
        }
    }
}
Now, for example, in our "Program.cs", inside our "Main" method, we can imagine having the following lines:
static void Main(string[] args)
{
    DIWrapper.Register<IGame, PlatformGame>();
    DIWrapper.Register<IInitializer, Initializer>();
    IInitializer initializer = DIWrapper.Resolve<IInitializer>();
}
The first line means that we map the interface "IGame" to the concrete class "PlatformGame". So, each time we want to resolve an object of type "IGame", we will get an object of the "PlatformGame" class. We then do the same with the "IInitializer" interface and ask Unity to resolve it. During the resolution, Unity will know that "IGame" is mapped to "PlatformGame" and will inject it into the "Initializer" object through the constructor we previously defined.
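The register/resolve flow can be sketched outside of Unity with a toy container (Python, hypothetical names); the point is only that resolving the dependent object triggers resolution of its dependency:

```python
# Toy container mimicking the register/resolve flow described above:
# map an abstract key to a factory, then build the dependent object
# by resolving its dependency first. Not the Unity API.
class Container:
    def __init__(self):
        self._registrations = {}

    def register(self, key, factory):
        self._registrations[key] = factory

    def resolve(self, key):
        # The factory receives the container so it can resolve
        # its own dependencies recursively.
        return self._registrations[key](self)

container = Container()
container.register("IGame", lambda c: "PlatformGame")
# "Initializer" depends on "IGame"; the container injects it on resolve.
container.register("IInitializer", lambda c: ("Initializer", c.resolve("IGame")))
print(container.resolve("IInitializer"))  # ('Initializer', 'PlatformGame')
```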
A little further with Factories
Now, let's imagine the following things:
public interface IDIFactory<T>
{
    T Create(params object[] constructorArguments);
}

public interface IGameFactory<T> : IDIFactory<T>
{
    GameType Type { get; set; }
}

public enum GameType
{
    Platform,
    RPG
}
Now, let's imagine the following implementation:
public class GameFactory<IGame> : IGameFactory<IGame>
{
    /**********************/
    /***** ATTRIBUTES *****/
    /**********************/

    /// <param name="_type">Type of object to create</param>
    protected GameType _type;

    /******************************/
    /***** TYPE GETTER/SETTER *****/
    /******************************/

    public GameType Type
    {
        get { return _type; }
        set { _type = value; }
    }

    /*********************/
    /***** CONSTRUCT *****/
    /*********************/

    public GameFactory()
    {
        Type = GameType.Platform;
    }

    /******************/
    /***** CREATE *****/
    /******************/

    public IGame Create(params object[] constructorArguments)
    {
        IGame game;

        switch (_type)
        {
            case GameType.Platform:
                DIWrapper.Register<CURRENT.NAMESPACE.IGame, PlatformGame>("Platform");
                game = DIWrapper.Resolve<IGame>("Platform");
                break;
            case GameType.RPG:
                DIWrapper.Register<CURRENT.NAMESPACE.IGame, RPGGame>("RPG");
                game = DIWrapper.Resolve<IGame>("RPG");
                break;
            default:
                DIWrapper.Register<CURRENT.NAMESPACE.IGame, PlatformGame>("Platform");
                game = DIWrapper.Resolve<IGame>("Platform");
                break;
        }

        return game;
    }
}
Here, we create a Factory that registers the "IGame" interface with multiple objects and gives us the right object depending on what we ask for.
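The same idea, reduced to a minimal runnable sketch (Python, mirroring the class and message names used above, but not the Unity-based implementation):

```python
# Factory that returns a different concrete game depending on the
# currently selected type, echoing the GameFactory above.
from enum import Enum

class GameType(Enum):
    PLATFORM = 1
    RPG = 2

class PlatformGame:
    def play(self):
        return "Playing a platform game"

class RPGGame:
    def play(self):
        return "Playing an RPG game"

class GameFactory:
    _games = {GameType.PLATFORM: PlatformGame, GameType.RPG: RPGGame}

    def __init__(self):
        self.type = GameType.PLATFORM   # same default as in the C# version

    def create(self):
        return self._games[self.type]()

factory = GameFactory()
factory.type = GameType.RPG
print(factory.create().play())   # Playing an RPG game
```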
Let's imagine that we have the following code in our "Main" method:
static void Main(string[] args)
{
    DIWrapper.Register<IGameFactory<IGame>, GameFactory<IGame>>();
    DIWrapper.Register<IInitializer, Initializer>();
    IInitializer initializer = DIWrapper.Resolve<IInitializer>();
}
We can then have the following code:
public class Initializer : IInitializer
{
    public Initializer(IGameFactory<IGame> gameFactory)
    {
        gameFactory.Type = GameType.RPG;
        IGame game1 = gameFactory.Create();
        game1.Play();

        gameFactory.Type = GameType.Platform;
        IGame game2 = gameFactory.Create();
        game2.Play();
    }
}
With unit tests
Unity also helps us write our unit tests by making it easy to inject mock objects. Here, we use the Moq package as well.
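The idea behind a mock can be shown without any framework (a Python sketch with a hypothetical hand-rolled `MockGameFactory`); Moq generates this kind of stand-in for us:

```python
# A hand-rolled mock standing in for the real factory, so the
# Initializer can be tested in isolation. This is the idea that
# a mocking library like Moq automates.
class MockGameFactory:
    def __init__(self):
        self.type = None
        self.create_calls = 0      # record interactions for assertions

    def create(self):
        self.create_calls += 1
        class _FakeGame:
            def play(self):
                return "fake play"
        return _FakeGame()

class Initializer:
    def __init__(self, game_factory):
        game_factory.create().play()

mock = MockGameFactory()
Initializer(mock)                  # no real games are constructed
assert mock.create_calls == 1      # the dependency was exercised once
```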
[TestClass]
public class InitializerTest
{
    /**********************/
    /***** ATTRIBUTES *****/
    /**********************/

    /// <param name="_instance">IInitializer object</param>
    /// <param name="_container">Unity Container</param>
    private IInitializer _instance;
    private IUnityContainer _container;

    /****************************/
    /***** INITIALIZE TESTS *****/
    /****************************/

    [TestInitialize]
    public void InitializeTests()
    {
        _container = new UnityContainer();
        var gameFactoryMock = new Mock<IGameFactory<IGame>>();
        _container.RegisterInstance<IGameFactory<IGame>>(gameFactoryMock.Object);
        _container.RegisterType<IInitializer, Initializer>();
        _instance = _container.Resolve<IInitializer>();
    }

    /**************************/
    /***** CHECK INSTANCE *****/
    /**************************/

    [TestMethod]
    public void CheckInstance()
    {
        Assert.IsNotNull(_instance, "Instance of object Initializer should not be null.");
    }
}
Conclusion
Through this article, we saw how we can easily achieve Dependency Injection with Unity. We saw that we can use various methods to apply this pattern and how we can wrap our container. We also saw how it can help us through our unit tests.
One last word
If you like this article, you can consider supporting and helping me on Patreon! It would be awesome! Otherwise, you can find my other posts on Medium and Tumblr. You will also know more about myself on my personal website. Until next time, happy headache!
Creating lists with SharePoint CSOM
In this post, we are going to see how we can work with SharePoint lists using CSOM.
Introduction
Since SharePoint 2010, SharePoint provides a way to interact with SharePoint sites called the Client Object Model, or CSOM. This means we are able to write scripts or Add-Ins without having to program directly on the server where SharePoint is installed.
Here, we are going to create a small program to see how we can use SharePoint CSOM to work with lists. For the sake of our example, we state that we are using a SharePoint on-premise installation.
Time to code
First, let's create a method to connect to SharePoint using CSOM:
protected void _ConnectSPCSOM(string url, string user, SecureString password, string list)
{
    using (ClientContext context = new ClientContext(url))
    {
        context.AuthenticationMode = ClientAuthenticationMode.Default;
        context.Credentials = new System.Net.NetworkCredential(user, password);

        try
        {
            Web web = context.Web;
            context.Load(web);
            context.ExecuteQuery();
            _CreateList(context, list);
        }
        catch (Exception exception)
        {
            Console.WriteLine(exception.Message);
        }
    }
}
Now, let's create another method to handle the creation of our list:
protected void _CreateList(ClientContext context, string list)
{
    if (!_CheckIfListExists(context, list))
    {
        _GenerateList(context, list);
    }

    _AddFields(context, list);
    _AddData(context);
}
Here, we call four other methods: one to check if the list already exists, another to generate it if necessary, one to add fields to the list and another one to add the data into the list.
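The overall flow is a simple create-if-missing routine; as a sketch (Python, against an in-memory stand-in rather than a real SharePoint web, with illustrative names):

```python
# Sketch of the create-if-missing flow described above, using a plain
# dictionary as a stand-in for the SharePoint web.
def ensure_list(web, list_name):
    if list_name not in web:                          # _CheckIfListExists
        web[list_name] = {"fields": [], "items": []}  # _GenerateList
    return web[list_name]

web = {}
ensure_list(web, "MyList")
ensure_list(web, "MyList")   # second call finds the list and is a no-op
print(len(web))              # 1
```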
The method that checks if the list already exists could look like so:
protected bool _CheckIfListExists(ClientContext context, string listName)
{
    ListCollection listCollection = context.Web.Lists;
    context.Load(listCollection, lists => lists.Include(list => list.Title).Where(list => list.Title == listName));
    context.ExecuteQuery();

    if (listCollection.Count == 0)
    {
        return false;
    }

    return true;
}
We can now generate the list like so:
protected void _GenerateList(ClientContext context, string list)
{
    Web web = context.Web;
    ListCreationInformation listCreationInformation = new ListCreationInformation();
    listCreationInformation.Title = list;
    listCreationInformation.TemplateType = (int)ListTemplateType.GenericList;
    List newList = web.Lists.Add(listCreationInformation);
    context.ExecuteQuery();
}
We can have another version of our method. In fact, sometimes, for example, in the case of a site using the "Team-Site" template, we can encounter an error saying that the list template is invalid. In such a case, we have to make a little variation:
protected void _GenerateList(ClientContext context, string list)
{
    Web web = context.Web;
    ListTemplate listTemplate = web.ListTemplates.GetByName("My template name");
    context.Load(listTemplate);
    context.ExecuteQuery();

    ListCreationInformation listCreationInformation = new ListCreationInformation();
    listCreationInformation.Title = list;
    listCreationInformation.TemplateType = listTemplate.ListTemplateTypeKind;
    List newList = web.Lists.Add(listCreationInformation);
    context.ExecuteQuery();
}
Now, before we add fields to our list, let's imagine a simple Struct that will help us to handle them:
public struct ListField
{
    public string Name { get; set; }
    public string DisplayName { get; set; }
    public string Type { get; set; }
    public string Extra { get; set; }
    public string ChoicesString { get; set; }
    public bool DisplayMainView { get; set; }
    public AddFieldOptions DefaultValue { get; set; }
}
We can now add our fields like so and check if they already exist:
protected void _AddFields(ClientContext context, string listName)
{
    Web web = context.Web;
    List list = web.Lists.GetByTitle(listName);
    context.Load(list);
    context.ExecuteQuery();

    List<ListField> fieldsList = new List<ListField>();
    fieldsList.Add(new ListField() { Name = "Field1", DisplayName = "Field1", Type = "Text", Extra = "", ChoicesString = "", DisplayMainView = true, DefaultValue = AddFieldOptions.DefaultValue });
    fieldsList.Add(new ListField() { Name = "Field2", DisplayName = "Field2", Type = "Text", Extra = "", ChoicesString = "", DisplayMainView = true, DefaultValue = AddFieldOptions.DefaultValue });
    fieldsList.Add(new ListField() { Name = "Field3", DisplayName = "Field3", Type = "Choice", Extra = "Format='Dropdown' FillInChoice='false'", ChoicesString = "<Default>None</Default><CHOICES><CHOICE>None</CHOICE><CHOICE>Choice 1</CHOICE><CHOICE>Choice 2</CHOICE><CHOICE>Choice 3</CHOICE><CHOICE>Choice 4</CHOICE></CHOICES>", DisplayMainView = true, DefaultValue = AddFieldOptions.DefaultValue });

    foreach (ListField field in fieldsList)
    {
        if (_CheckIfFieldExistsInList(field.Name, list))
        {
            continue;
        }

        string str;

        if (field.Type == "Choice")
        {
            str = "<Field Type='" + field.Type + "' DisplayName='" + field.DisplayName + "' Name='" + field.Name + "' " + field.Extra + " >";
            str += field.ChoicesString;
            str += "</Field>";
        }
        else
        {
            str = "<Field Type='" + field.Type + "' DisplayName='" + field.DisplayName + "' Name='" + field.Name + "'/>";
        }

        Field addedField = list.Fields.AddFieldAsXml(str, field.DisplayMainView, field.DefaultValue);
    }

    context.ExecuteQuery();
}

protected bool _CheckIfFieldExistsInList(string fieldName, List list)
{
    foreach (Field field in list.Fields)
    {
        if (field.Title.Equals(fieldName))
        {
            return true;
        }
    }

    return false;
}
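The field definitions above are built as CAML XML strings by concatenation; a minimal reproduction of that string building (Python sketch, same attribute layout as the C# code):

```python
# Reproduces the CAML field-XML string building used above.
def field_xml(field_type, display_name, name, extra="", choices=""):
    if field_type == "Choice":
        # Choice fields carry extra attributes plus a CHOICES body
        return ("<Field Type='{0}' DisplayName='{1}' Name='{2}' {3} >{4}</Field>"
                .format(field_type, display_name, name, extra, choices))
    return ("<Field Type='{0}' DisplayName='{1}' Name='{2}'/>"
            .format(field_type, display_name, name))

xml = field_xml("Text", "Field1", "Field1")
print(xml)  # <Field Type='Text' DisplayName='Field1' Name='Field1'/>
```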
Now it is time to add some data to our list. First, for the sake of our example, let's imagine another really simple Struct that represents the data we want to add. Of course, this will change depending on our situation; here, it is just for the example.
public struct WebData
{
    public string Title { get; set; }
    public string Subtitle { get; set; }
    public string Url { get; set; }
    public string Type { get; set; }
}
We can now write our method to add our data:
protected void _AddData(ClientContext context)
{
    foreach (WebData item in _data)
    {
        ListItem listItem = _CheckIfItemExists(context, item);

        if (listItem == null)
        {
            _InsertItem(context, item);
            continue;
        }

        _UpdateItem(context, item, listItem);
    }
}
Here, as we can see, we suppose that our data is stored in a variable named "_data", which is a "List<WebData>". We check whether each item of this list already exists in our SharePoint list.
protected ListItem _CheckIfItemExists(ClientContext context, WebData item)
{
    Web web = context.Web;
    List list = web.Lists.GetByTitle(_targetList);

    CamlQuery query = new CamlQuery
    {
        ViewXml = "<View><Query><Where><Eq><FieldRef Name='Field1' /><Value Type='Text'>" + item.Title + "</Value></Eq></Where></Query></View>"
    };

    ListItemCollection items = list.GetItems(query);
    context.Load(items);
    context.ExecuteQuery();

    if (items.Count == 0)
    {
        return null;
    }

    return items[0];
}
To query our item and check its existence, we use a CAML Query. This check helps us to decide if we have to insert a new item in our SharePoint list or if we have to update an existing one.
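The ViewXml used above can be assembled the same way; a small sketch (Python, with a hypothetical `existence_query` helper) mirroring the C# concatenation:

```python
# Builds the same CAML existence query as the C# code above:
# an equality filter on a single text field.
def existence_query(field, value):
    return ("<View><Query><Where><Eq><FieldRef Name='{0}' />"
            "<Value Type='Text'>{1}</Value></Eq></Where></Query></View>"
            .format(field, value))

q = existence_query("Field1", "My title")
print(q.startswith("<View>"))  # True
```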
protected void _InsertItem(ClientContext context, WebData item)
{
    Web web = context.Web;
    List list = web.Lists.GetByTitle(_targetList);
    ListItemCreationInformation itemCreateInfo = new ListItemCreationInformation();
    ListItem listItem = list.AddItem(itemCreateInfo);
    context.ExecuteQuery();
    _SetDataItem(context, item, listItem);
}

protected void _UpdateItem(ClientContext context, WebData item, ListItem listItem)
{
    Web web = context.Web;
    _SetDataItem(context, item, listItem);
}
Finally, we can update our SharePoint list like so:
protected void _SetDataItem(ClientContext context, WebData item, ListItem listItem)
{
    listItem["Title"] = item.Title;
    listItem["Field1"] = item.Subtitle;
    listItem["Field2"] = item.Url;
    listItem["Field3"] = item.Type;
    listItem.Update();
    context.ExecuteQuery();
}
Conclusion
Through this post, we saw how we can work with SharePoint lists using CSOM. We had an overview of the different operations we can achieve using the API.
One last word
If you like this article, you can consider supporting and helping me on Patreon! It would be awesome! Otherwise, you can find my other posts on Medium and Tumblr. You will also know more about myself on my personal website. Until next time, happy headache!
Uploading Add-Ins to SharePoint App Catalog using PowerShell
In this small article, we are going to see how we can upload a SharePoint Add-In to the App Catalog using PowerShell. Let's get into it!
Introduction
Of course, we can upload our precious and well-developed SharePoint Add-Ins manually to the App Catalog. Unfortunately, this process can sometimes be tedious depending on our workflow. So here, we are going to craft a PowerShell script to ease this process. For our exercise, we are targeting a SharePoint on-premise installation.
The script
Depending on where we want to use our script, we are going to handle two cases: directly on the server and remotely. So, our script is going to use both SSOM and CSOM, and the choice will be made by passing an argument to the script. We will also offer the option to force Windows Authentication, because this can be useful in an environment using ADFS.
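The on-server/remote choice itself is nothing more than a flag-to-branch dispatch; sketched with stand-in functions (Python, hypothetical names):

```python
# The script below branches on a boolean argument to pick the remote
# (CSOM) or on-server (SSOM) path; the dispatch reduces to this.
def upload_via_csom():
    return "csom"   # stand-in for the remote ClientContext path

def upload_via_ssom():
    return "ssom"   # stand-in for the on-server Get-SPWeb path

def upload(force_auth):
    return upload_via_csom() if force_auth else upload_via_ssom()

print(upload(True))   # csom
```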
#******************#
#***** PARAMS *****#
#******************#

Param(
    [string] $filePath,
    [string] $appList,
    [string] $siteUrl,
    [string] $username,
    [string] $password,
    [bool] $forceAuth = $true,
    [bool] $forceWindowsAuth = $false
)

#****************************************#
#****************************************#

#********************#
#***** INCLUDES *****#
#********************#

[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint.Client") | Out-Null
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint.Client.Runtime") | Out-Null
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

#****************************************#
#****************************************#

#*************************************#
#***** MIXED AUTH REQUEST METHOD *****#
#*************************************#

function MixedAuthRequestMethod()
{
    param([Parameter(Mandatory=$true)][object]$clientContext)
    Add-Type -TypeDefinition @"
    using System;
    using Microsoft.SharePoint.Client;
    namespace SPCSOM.SPOHelpers
    {
        public static class ClientContextHelper
        {
            public static void AddRequestHandler(ClientContext context)
            {
                context.ExecutingWebRequest += new EventHandler<WebRequestEventArgs>(RequestHandler);
            }

            private static void RequestHandler(object sender, WebRequestEventArgs e)
            {
                e.WebRequestExecutor.RequestHeaders.Remove("X-FORMS_BASED_AUTH_ACCEPTED");
                e.WebRequestExecutor.RequestHeaders.Add("X-FORMS_BASED_AUTH_ACCEPTED", "f");
            }
        }
    }
"@ -ReferencedAssemblies "C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\16\ISAPI\Microsoft.SharePoint.Client.dll", "C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\16\ISAPI\Microsoft.SharePoint.Client.Runtime.dll";
    [SPCSOM.SPOHelpers.ClientContextHelper]::AddRequestHandler($clientContext);
}

#****************************************#
#****************************************#

#*************************#
#***** START PROCESS *****#
#*************************#
function StartProcess()
{
    Write-Host "*********************************"
    Write-Host "***** UPLOAD APP TO CATALOG *****"
    Write-Host "*********************************"
    Write-Host " "

    if ($filePath -eq $null -Or $appList -eq $null -Or $siteUrl -eq $null -Or $username -eq $null -Or $password -eq $null)
    {
        Write-Host "Missing value. Script will end." -ForegroundColor Red
        EndProcess
    }

    Write-Host "***** START *****" -ForegroundColor Green
    Write-Host " "
}

#****************************************#
#****************************************#

#******************************#
#***** UPLOAD APPLICATION *****#
#******************************#

function UploadApplication()
{
    $ctx
    $site

    Try
    {
        Write-Host "...application will be uploaded"

        if ($forceAuth)
        {
            $ctx = WithAuth
            $list = GetList $ctx $null
            $upload = GenerateUpload
            UploadFile $ctx $null $list $upload
            Write-Host "Application uploaded!" -ForegroundColor Green
            return
        }

        $site = WithoutAuth
        $list = GetList $null $site
        $upload = GenerateUpload
        UploadFile $null $site $list $upload
        Write-Host "Application uploaded!" -ForegroundColor Green
        return
    }
    Catch
    {
        Write-Host $_.Exception.Message -ForegroundColor Yellow
        Write-Host "An error occurred. Script will end." -ForegroundColor Red
        EndProcess $ctx $site
    }
}

#****************************************#
#****************************************#

#*********************#
#***** WITH AUTH *****#
#*********************#

function WithAuth()
{
    Try
    {
        Write-Host "...connecting to server"
        $secpw = ConvertTo-SecureString $password -AsPlainText -Force
        $ctx = New-Object Microsoft.SharePoint.Client.ClientContext($siteUrl)
        $ctx.AuthenticationMode = [Microsoft.SharePoint.Client.ClientAuthenticationMode]::Default
        $credentials = New-Object System.Net.NetworkCredential($username, $secpw)
        $ctx.Credentials = $credentials

        if ($forceWindowsAuth)
        {
            MixedAuthRequestMethod $ctx;
        }

        if (!$ctx.ServerObjectIsNull.Value)
        {
            return $ctx;
        }
        else
        {
            Write-Host "Server object is null. Script will end." -ForegroundColor Red
            EndProcess
        }
    }
    Catch
    {
        Write-Host $_.Exception.Message -ForegroundColor Yellow
        Write-Host "An error occurred. Script will end." -ForegroundColor Red
        EndProcess
    }
}

#****************************************#
#****************************************#

#************************#
#***** WITHOUT AUTH *****#
#************************#

function WithoutAuth()
{
    Write-Host "...getting site"
    return Get-SPWeb $siteUrl
}

#****************************************#
#****************************************#

#********************#
#***** GET LIST *****#
#********************#

function GetList($ctx, $site)
{
    Write-Host "...getting list"

    if ($forceAuth)
    {
        $list = $ctx.Web.Lists.GetByTitle($appList)
        $ctx.Load($list)
        $ctx.ExecuteQuery()
        return $list
    }

    return $site.Lists[$appList]
}

#****************************************#
#****************************************#

#***************************#
#***** GENERATE UPLOAD *****#
#***************************#

function GenerateUpload()
{
    Write-Host "...generating upload"
    $file = Get-ChildItem $filePath;

    if ($forceAuth)
    {
        $fileName = $filePath.Substring($filePath.LastIndexOf("\") + 1)
        $fileStream = New-Object IO.FileStream($file, [System.IO.FileMode]::Open)
        $fileCreationInfo = New-Object Microsoft.SharePoint.Client.FileCreationInformation
        $fileCreationInfo.Overwrite = $true
        $fileCreationInfo.ContentStream = $fileStream
        $fileCreationInfo.URL = $fileName
        return $fileCreationInfo
    }

    return Get-ChildItem $filePath;
}

#****************************************#
#****************************************#

#***********************#
#***** UPLOAD FILE *****#
#***********************#

function UploadFile($ctx, $site, $list, $upload)
{
    Write-Host "...uploading file"
    $fileName = $filePath.Substring($filePath.LastIndexOf("\") + 1)

    if ($forceAuth)
    {
        $file = $list.RootFolder.Files.Add($upload)
        $ctx.ExecuteQuery()
        $ctx.Dispose()
        return
    }

    $file = $list.RootFolder.Files.Add($fileName, $upload.OpenRead(), $true)
    $site.Dispose()
    return
}

#****************************************#
#****************************************#

#***********************#
#***** END PROCESS *****#
#***********************#

function EndProcess($ctx, $site)
{
    if ($ctx)
    {
        $ctx.Dispose()
    }

    if ($site)
    {
        $site.Dispose()
    }

    Write-Host " "
    Write-Host "***** END *****" -ForegroundColor Green
    Write-Host " "
    exit
}

#****************************************#
#****************************************#

#****************#
#***** MAIN *****#
#****************#

function Main()
{
    StartProcess
    UploadApplication
    EndProcess
}

#****************************************#
#****************************************#

#******************#
#***** SCRIPT *****#
#******************#

Main
PowerShell script
As we can see, there is no real magic trick in this script. Used correctly, it will let us upload our ".app" file directly from our server or remotely. So, we can imagine using it in an automated build/release process.
Conclusion
Through this really short article, we saw how we can create a PowerShell script to automate the SharePoint Add-Ins uploading process.
One last word
If you like this article, you can consider supporting and helping me on Patreon! It would be awesome! Otherwise, you can find my other posts on Medium and Tumblr. You will also know more about myself on my personal website. Until next time, happy headache!
Using SharePoint CSOM with ADFS
Through this article, we are going to see how we can use SharePoint CSOM when ADFS is used for authentication. Then, we are also going to make a little side note about WSS.
Introduction
Since SharePoint 2010, SharePoint provides a way to interact with SharePoint sites called the Client Object Model, or CSOM. This means we are able to write scripts or add-ins without having to program directly on the server where SharePoint is installed.
Using CSOM is pretty straightforward. However, when ADFS is used, we can have some struggles. Just to refresh our minds: ADFS, or Active Directory Federation Services, runs on Windows Server and allows single sign-on (SSO) access to systems and applications located across organizational boundaries.
Let's see how we can achieve a simple authentication with such a setup. We will then have a little side note about old SharePoint environments.
Using CSOM
The trick is not so complicated. The secret is in the "AuthenticationMode" property and in the "ExecutingWebRequest" event of the "ClientContext" class. Let's make it with C#:
private void _ConnectSPCSOM(string url, string user, SecureString password)
{
    using (ClientContext context = new ClientContext(url))
    {
        context.AuthenticationMode = ClientAuthenticationMode.Default;
        context.Credentials = new System.Net.NetworkCredential(user, password);
        context.ExecutingWebRequest += new EventHandler<WebRequestEventArgs>(_MixedAuthRequestMethod);

        try
        {
            Web web = context.Web;
            Console.WriteLine("Loading web...");
            context.Load(web);
            context.ExecuteQuery();
            Console.WriteLine(web.Title);
            Console.WriteLine(web.Url);
        }
        catch (Exception exception)
        {
            Console.WriteLine("Exception of type " + exception.GetType() + " caught.");
            Console.WriteLine(exception.Message);
            Console.WriteLine(exception.StackTrace);
        }
    }
}

private void _MixedAuthRequestMethod(object sender, WebRequestEventArgs e)
{
    try
    {
        e.WebRequestExecutor.RequestHeaders.Remove("X-FORMS_BASED_AUTH_ACCEPTED");
        e.WebRequestExecutor.RequestHeaders.Add("X-FORMS_BASED_AUTH_ACCEPTED", "f");
    }
    catch (Exception exception)
    {
        Console.WriteLine("Exception of type " + exception.GetType() + " caught.");
        Console.WriteLine(exception.Message);
        Console.WriteLine(exception.StackTrace);
    }
}
C# code
As we can see, we use CSOM as usual. However, we first set the authentication mode to "Default". It is important that this line is placed before the next two lines. We then pass our credentials, using a "SecureString" for the password. Finally, we attach a new "EventHandler" that uses the "_MixedAuthRequestMethod" method. This method adds the right information to the request headers to force Windows Authentication.
If we have to do it with PowerShell, we can imagine doing it like so:
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint.Client") | Out-Null
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint.Client.Runtime") | Out-Null
Add-PSSnapin Microsoft.SharePoint.PowerShell

function MixedAuthRequestMethod()
{
    param([Parameter(Mandatory=$true)][object]$clientContext)
    Add-Type -TypeDefinition @"
    using System;
    using Microsoft.SharePoint.Client;

    namespace SPCSOM.SPOHelpers
    {
        public static class ClientContextHelper
        {
            public static void AddRequestHandler(ClientContext context)
            {
                context.ExecutingWebRequest += new EventHandler<WebRequestEventArgs>(RequestHandler);
            }

            private static void RequestHandler(object sender, WebRequestEventArgs e)
            {
                e.WebRequestExecutor.RequestHeaders.Remove("X-FORMS_BASED_AUTH_ACCEPTED");
                e.WebRequestExecutor.RequestHeaders.Add("X-FORMS_BASED_AUTH_ACCEPTED", "f");
            }
        }
    }
"@ -ReferencedAssemblies "C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\16\ISAPI\Microsoft.SharePoint.Client.dll", "C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\16\ISAPI\Microsoft.SharePoint.Client.Runtime.dll";
    [SPCSOM.SPOHelpers.ClientContextHelper]::AddRequestHandler($clientContext);
}

$url = "SHAREPOINT-URL"
$user = "username"
$password = "plain-text-password"
$secpw = ConvertTo-SecureString $password -AsPlainText -Force

$context = New-Object Microsoft.SharePoint.Client.ClientContext($url)
$context.AuthenticationMode = [Microsoft.SharePoint.Client.ClientAuthenticationMode]::Default
$credentials = New-Object System.Net.NetworkCredential($user, $secpw)
$context.Credentials = $credentials
MixedAuthRequestMethod $context;

if (!$context.ServerObjectIsNull.Value)
{
    Try
    {
        $web = $context.Web
        $context.Load($web)
        $context.ExecuteQuery()
        Write-Host $web.Title
        Write-Host $web.Url
    }
    Catch
    {
        Write-Host $_.Exception.Message
        Write-Host $_.Exception
        Write-Host "Can't connect to" $url -ForegroundColor Red
    }
}
else
{
    Write-Host "Server object is null"
}
PowerShell code
Now, with this little trick, we should be able to use SharePoint CSOM with ADFS. However, we may still get a 401 error. In such a case, we probably have to check the security configuration of our environment. For example, if there is a Reverse Proxy between us and the SharePoint Front-End Server, a possible workaround is to write the Front-End Server IP address in our hosts file.
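For illustration, such a hosts entry could look like the line below. On Windows, the hosts file lives at "C:\Windows\System32\drivers\etc\hosts". The IP address and hostname here are placeholders, not real values:

```
# Map the SharePoint hostname directly to the Front-End Server,
# bypassing the Reverse Proxy (placeholder values).
192.168.1.10    sharepoint.contoso.com
```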
Side note
We may find ourselves in a situation where we want to achieve what we could achieve with CSOM in an old SharePoint environment, for example WSS 3.0. Unfortunately, here, CSOM is not available. Nevertheless, there is a solution: SharePoint Web Services. There are many different Web Services available at URLs like "https://servername/_vti_bin/Lists.asmx". To use a Web Service in Visual Studio, we have to right-click on "References", choose "Add Service Reference", then "Advanced". We then have to enter the URL, validate it and click on "Add Web Reference". We can then use our Web Service like so:
private void _GetWSSCollectionsInfo(string url, string user, SecureString password)
{
    TheWebService.Webs webs = new TheWebService.Webs
    {
        PreAuthenticate = true,
        Credentials = new System.Net.NetworkCredential(user, password),
        Url = url + "/_vti_bin/Webs.asmx"
    };

    try
    {
        XmlNode collection = webs.GetWebCollection();

        if (collection == null || collection.ChildNodes[0] == null)
        {
            return;
        }

        XmlNodeList nodes = collection.SelectNodes("*");

        foreach (XmlNode node in nodes)
        {
            Console.WriteLine("Title: " + node.Attributes["Title"].Value);
            Console.WriteLine("Url: " + node.Attributes["Url"].Value);
        }
    }
    catch (Exception exception)
    {
        Console.WriteLine("Exception of type " + exception.GetType() + " caught.");
        Console.WriteLine(exception.Message);
        Console.WriteLine(exception.StackTrace);
    }
}
C# code
Conclusion
Through this article, we saw how we can use SharePoint CSOM with ADFS. The trick is to force Windows Authentication. We saw how we can do it with C# and PowerShell. We also had a little side note about SharePoint Web Services.
One last word
If you like this article, you can consider supporting and helping me on Patreon! It would be awesome! Otherwise, you can find my other posts on Medium and Tumblr. You will also know more about myself on my personal website. Until next time, happy headache!
An overview of Angular
In this article, we are going to have a look at the Angular framework. Let's get into it!
Introduction
Nowadays, we have plenty of options to develop something that is ready for various platforms. However, Angular has made its way and it is now one of the most important actors. Let's see what it is and how it works.
We could jump right into the code of a project, but we would probably miss a few things. So, here, we are going to look at the architecture of Angular to understand the different concepts and elements it uses.
What is Angular?
Nowadays, when we talk about Angular, we talk about Angular 2 and later versions, such as Angular 5. Angular is a complete rewrite of the AngularJS framework and has a different approach from its predecessor.
Angular allows us to build applications across all platforms. It is an open-source platform that uses TypeScript. In a few words, TypeScript is a strict syntactical superset of JavaScript, and adds optional static typing to the language.
Architecture overview
Angular is written in TypeScript and it implements core and optional functionality as a set of TypeScript libraries that we can import.
The building blocks of an Angular application are called NgModules: an Angular app is defined by a set of NgModules and always has at least a Root Module that enables bootstrapping. An NgModule is made of Components, and every app has at least a Root Component.
Components, and things like Services, are just classes. They are, however, marked with decorators that tell Angular how to use them.
Angular provides a Router Service that helps us to define navigation paths among the different Views.
Modules
Angular apps are modular and this modularity system is called NgModules.
An NgModule defines a set of Components. An NgModule associate related code to form functional units. Every Angular app has a Root Module, conventionally named AppModule, which provides the bootstrap mechanism that launches the application.
Even if they are different and unrelated, NgModules, like JavaScript modules, can import functionality from other NgModules, and allow their own functionality to be exported and used by other NgModules. What we call Angular Libraries are NgModules.
We declare an NgModule by decorating our class with the "@NgModule" decorator. This decorator is a metadata object whose properties describe the module. The most important properties, which are arrays, are the following:
declarations - Components, Directives, and Pipes that belong to the NgModule
exports - the subset of declarations that should be visible and usable in the Components of other NgModules
imports - other modules whose exported classes are needed by Components declared in the NgModule
providers - list of the needed Services that, because they are listed here, become available app-wide
bootstrap - the main application View, called the Root Component, which hosts all other app views. (only the Root Module should set this bootstrap property)
An NgModule provides a compilation context for its various Components. So, the Components that belong to an NgModule share a compilation context. NgModules define a cohesive block of functionality.
The Root Module of our application is the one that we bootstrap to launch the application. The application launches by bootstrapping the root AppModule. The bootstrapping process creates the Components listed in the "bootstrap" array, also called entry components, and inserts each one into the browser DOM. So, each bootstrapped Component is the base of its own tree of Components.
As we saw, we can have a Root Module, but we can have what we call Feature Modules. A Feature Module delivers a cohesive set of functionality focused on a specific application needs. We could do everything in the Root Module, but a Feature Module will help us partition our app into focused areas. However, the structure of a Feature Module is exactly the same as the one of a Root Module.
Down below, we can find an example of what an NgModule could look like. Here, it is the AppModule:
// Importing Angular Libraries
import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
import { FormsModule } from '@angular/forms';
import { HttpModule } from '@angular/http';

// Importing the AppComponent
import { AppComponent } from './app.component';

// Importing a custom feature module
import { CustomFeatureModule } from './custom-feature-module/custom-feature-module.module';

// Declaring the Module
@NgModule({
  declarations: [
    AppComponent
  ],
  imports: [
    BrowserModule,
    FormsModule,
    HttpModule,
    CustomFeatureModule
  ],
  providers: [],
  bootstrap: [AppComponent]
})
export class AppModule { }
Components
A Component controls a patch of screen called a View. The logic of a Component is defined inside a class. Angular creates, updates, and destroys Components as the user moves through the application.
A Component is identified by the "@Component" decorator that has a set of properties. The most important properties are the following ones:
selector - tells how the component is referenced in HTML; in simple words, it corresponds to the HTML tag.
templateUrl - gives the path of the HTML template.
providers - an array of Dependency Injection Providers for Services that the Component requires.
Notice that instead of the "templateUrl" property, we could use the "template" property that lets us provide the HTML template inline.
A Component has a View, which is defined through an HTML template. This HTML file also contains some Angular-specific syntactic elements.
A Component will typically look like so:
@Component({
  selector: 'my-component',
  templateUrl: './my-component.component.html',
  providers: [ MyService ]
})
export class MyComponent implements OnInit {
  // Some code
}
Before going any further with Components, let's take a look at a few other elements to simplify some terms that we will use later.
Services
A Service is useful for defining things that don't fit into a Component and whose reason to exist lies in the separation of concerns. A Service is a class with a well-defined purpose. For example, we should create a Service when two or more Components need to access the same data, when we want to encapsulate interactions with a web server, or when we want to define how to validate user input.
Services are Singletons, so there is only one instance of each Service we define. They are stateless objects that can be invoked from any Component. Their purpose is to help us divide our application into small, distinct logical units that can be reused.
A Service is a simple class and could look like so:
export class Logger {
  log(msg: any) { console.log(msg); }
  error(msg: any) { console.error(msg); }
  warn(msg: any) { console.warn(msg); }
}
Dependency Injection
Dependency Injection is a large subject. Dependency Injection, also called DI, is a Design Pattern in which one or more dependencies (Services) are injected into a dependent object (Client). This pattern allows us to implement a loosely coupled architecture by separating the creation of a client's dependencies from its own behavior.
We can apply this pattern when we want to remove knowledge of concrete implementations from objects, but also when we want code that is easier to test in isolation using mock objects.
The DI Pattern is commonly used to implement the Inversion of Control Principle, which, in a few words, separates the what-to-do part from the when-to-do part. In other words, it is about letting somebody else handle the flow of control. It is based on the Hollywood Principle: "Don't call us, we'll call you".
Dependency Injection could be achieved by using the "constructor" of a class or "setter" methods. It can also be achieved with a Container that handles the instantiation of other objects.
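To make constructor injection concrete, here is a minimal framework-agnostic sketch. The class and method names are illustrative, not Angular APIs; the point is that the dependent class receives its dependency instead of creating it, which lets us swap in a mock for testing:

```typescript
// Constructor-based dependency injection, independent of any framework.
interface Logger {
  log(msg: string): void;
}

class ConsoleLogger implements Logger {
  log(msg: string): void {
    console.log(msg);
  }
}

class ReportService {
  // The dependency is injected; the class never instantiates its own logger.
  constructor(private logger: Logger) {}

  run(): string {
    this.logger.log('running report');
    return 'done';
  }
}

// In tests, we can swap in a mock that records calls instead of printing.
class MockLogger implements Logger {
  messages: string[] = [];
  log(msg: string): void {
    this.messages.push(msg);
  }
}

const mock = new MockLogger();
const service = new ReportService(mock);
const result = service.run();
console.log(result, mock.messages.length); // done 1
```

Because `ReportService` only depends on the `Logger` interface, production code can pass a `ConsoleLogger` while tests pass a `MockLogger`, without touching the class itself.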
In Angular, DI is widely used and we can take a moment to dig a little into it.
Angular uses its own Dependency Injection framework that basically uses three things:
The Injector, which exposes APIs and is responsible for creating Service instances and injecting them into classes.
The Provider, which tells the Injector how to create an instance of a dependency.
The Dependency, the type of the object that should be created.
Angular has a Hierarchical Dependency Injection system. There is a tree of Injectors that parallels an application's Component tree. An application may have multiple Injectors. That means we can configure Providers at different levels:
For the whole application, when bootstrapping it. All sub Injectors will see the Provider and share the instance associated with it. It will always be the same instance.
For a specific Component and its sub Components. Other Components won't see the Provider.
For Services. They use the Injector of the element that started the call chain to the Service.
When using DI with Angular, we will mainly see the "@Injectable" decorator. This decorator marks a class as available to an Injector for instantiation.
In an Angular app, Components consume Services. A Component shouldn't create a Service. So, we inject the different required Services into the different Components. When Angular creates a new instance of a Component class, it determines which Services or other dependencies that Component needs by looking at the types of its constructor parameters. When Angular discovers that a Component needs a Service, it checks if the Injector already has any existing instances of that same Service. If an instance of that requested Service doesn't exist, the Injector makes one using the registered Provider and adds it to the Injector before returning the Service to Angular.
A Provider is a recipe for creating a dependency. We must at least register one Provider of any Service we want to use. It can be done in Modules or in Components. Doing this in a Module allows Angular to inject the corresponding Services in any class it creates and so the Service instance lives for the life of the app. By using a Component Provider we restrict the scope of the Service and so it will only be injected into that Component instance or one of its descendant Component instances. It means that Angular can't inject the same Service instance anywhere else. The lifetime of this Service will also be different: the Service instance will be destroyed when the Component instance is destroyed.
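The create-or-reuse behavior described above can be sketched with a toy container. This is far simpler than Angular's hierarchical injector, and all names are made up; it only illustrates how a Provider (here, a factory function) and instance caching give each Service singleton behavior within one Injector:

```typescript
// A toy injector: providers are factories, instances are cached so each
// service behaves as a singleton within this injector.
type Factory<T> = () => T;

class TinyInjector {
  private providers = new Map<string, Factory<unknown>>();
  private instances = new Map<string, unknown>();

  register<T>(token: string, factory: Factory<T>): void {
    this.providers.set(token, factory);
  }

  get<T>(token: string): T {
    // Return the cached instance if one was already created...
    if (this.instances.has(token)) {
      return this.instances.get(token) as T;
    }
    // ...otherwise build it with the registered provider and cache it.
    const factory = this.providers.get(token);
    if (!factory) {
      throw new Error(`No provider for ${token}`);
    }
    const instance = factory();
    this.instances.set(token, instance);
    return instance as T;
  }
}

const injector = new TinyInjector();
injector.register('Logger', () => ({ log: (m: string) => console.log(m) }));

const a = injector.get<{ log(m: string): void }>('Logger');
const b = injector.get<{ log(m: string): void }>('Logger');
console.log(a === b); // true: both requests get the same cached instance
```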
Here is how we can inject a Service in a Component:
import { Injectable } from '@angular/core';

@Injectable()
export class Logger {
  log(msg: any) { console.log(msg); }
  error(msg: any) { console.error(msg); }
  warn(msg: any) { console.warn(msg); }
}
logger.service.ts file
import { Component, OnInit } from '@angular/core';
import { Logger } from './logger';

@Component({
  selector: 'my-component',
  templateUrl: './my-component.component.html',
  providers: [ Logger ]
})
export class MyComponent implements OnInit {
  constructor(private logger: Logger) {}

  ngOnInit() {}
}
my-component.component.ts file
We could also do it with the Root Module like so:
@NgModule({ providers: [ Logger ] })
app.module.ts file
import { Component, OnInit } from '@angular/core';
import { Logger } from './logger';

@Component({
  selector: 'my-component',
  templateUrl: './my-component.component.html',
})
export class MyComponent implements OnInit {
  constructor(private logger: Logger) {}

  ngOnInit() {}
}
my-component.component.ts file
We can also imagine that a Service needs another Service:
import { Injectable } from '@angular/core';
import { Logger } from './logger.service';

@Injectable()
export class MyService {
  constructor(private logger: Logger) { }
}
Data Binding
Basically, data binding allows the properties of two objects to be linked so that a change in one causes a change in the other. It establishes a connection between the user interface and the underlying application. It defines a relationship between two objects: a source object that provides data and a target object that uses the data from the source object. The benefit of data binding is that we no longer have to worry about synchronizing data between our Views and our data source.
With Angular, the most common way to display a Component property is to bind that property name through interpolation. Interpolation evaluates a string containing one or more placeholders and replaces those placeholders with computed values taken from a given context; here, the context is typically the Component instance. So, basically, in Angular, to achieve this, we have to put the property name in the View, enclosed in double curly braces. It will be something like so:
<h1>{{title}}</h1>
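A naive sketch of interpolation, nothing like Angular's actual template compiler, can make the idea concrete: replace each "{{name}}" placeholder in a string with the matching value from a context object.

```typescript
// Replace {{placeholder}} tokens in a template string with values from a
// context object. Purely illustrative; Angular compiles templates instead.
function interpolate(template: string, context: Record<string, unknown>): string {
  return template.replace(/\{\{\s*(\w+)\s*\}\}/g, (_match: string, name: string) =>
    name in context ? String(context[name]) : ''
  );
}

const html = interpolate('<h1>{{title}}</h1>', { title: 'Tour of Heroes' });
console.log(html); // <h1>Tour of Heroes</h1>
```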
Most of the time, bindings are used to connect the visuals of an application with an underlying data model, usually in a realization of the MVVM Pattern (Model-View-ViewModel) or the MVC Pattern (Model-View-Controller). In Angular, the Component plays the part of the Controller/ViewModel, and the template represents the View.
Angular provides many kinds of data binding. Binding types can be grouped into three categories distinguished by the direction of the data flow: source-to-view, view-to-source and two-way sequence: view-to-source-to-view. When we use binding types other than interpolation, we have to specify a target name that is the name of a property. It looks like an attribute name, but it is not. With data binding, we are not working with HTML attributes, but properties of DOM (Document Object Model) elements. Just to refresh our minds, we may say that attributes are defined by HTML and properties are defined by DOM and the responsibility of HTML attributes is just to initialize DOM properties. Later DOM properties can change, but HTML attributes cannot. Some DOM properties don't have corresponding attributes and some HTML attributes don't have corresponding properties. The target of a data binding is something in the DOM.
import { Component } from '@angular/core';

@Component({
  selector: 'my-component',
  templateUrl: './my-component.component.html',
})
export class MyComponent {
  imgSrc: String = 'path-to-image';
}
my-component.component.ts file
<img [src]="imgSrc">
my-component.component.html file
We often say that property binding is one-way data binding because it flows a value in one direction, from a Component's data property into a target element property. However, we are allowed to achieve something called two-way data binding that, for example, lets us display a data property and update that property when the user makes changes. We can do this by using the syntax "[(x)]".
We are also able to achieve event binding:
export class MyComponent {
  doSomething() {
    // some code
  }
}
my-component.component.ts file
<button (click)="doSomething()">Do something</button>
my-component.component.html file
Input and Output
In a Component, we can use two decorators on properties: "@Input" and "@Output".
An Input property is a settable property. An Output property is an observable property. Input properties usually receive data values. Output properties expose Event producers.
Declaring an Input property would give something like so:
export class MyComponent {
  @Input() name: String;
}
my-component.component.ts file
<my-component name="foo"></my-component>
my-component.component.html file
An Output property almost always returns an Angular EventEmitter. An EventEmitter allows us to emit a custom Event. It is helpful to pass a value to a parent Component. Let's say that we have something like this:
export class MyComponent {
  @Output() deleteItemRequest = new EventEmitter<Item>();

  delete() {
    this.deleteItemRequest.emit(this.item);
  }
}
my-component.component.ts file
<button (click)="delete()">Delete</button>
my-component.component.html file
As we can see, here, we use event binding. So, when the button is clicked, we call the "delete()" method. In the Component, we also declare an Output property that returns an EventEmitter and we declare its underlying type as "Item". So, when the "delete()" method is called, we use this EventEmitter to emit a new Event. In fact, it will emit an "Item" object.
So, we can now imagine that we have the following thing as a parent Component:
export class ParentComponent { deleteItem(item: Item) { // Some code } }
parent-component.component.ts file
<my-component (deleteItemRequest)="deleteItem($event)"></my-component>
parent-component.component.html file
When the child Component emits its Event, the parent Component will use the result of this same Event with its own method.
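The child-to-parent flow above can be illustrated with a stripped-down emitter, independent of Angular. All names here are illustrative, and Angular's real EventEmitter (built on RxJS) is richer; the sketch only shows the mechanism of emitting a value that a subscriber handles:

```typescript
// A minimal event emitter: subscribers register callbacks, emit() calls them.
class SimpleEmitter<T> {
  private listeners: Array<(value: T) => void> = [];

  subscribe(listener: (value: T) => void): void {
    this.listeners.push(listener);
  }

  emit(value: T): void {
    for (const listener of this.listeners) {
      listener(value);
    }
  }
}

interface Item { id: number; name: string; }

// The "child" exposes an output; the "parent" subscribes to it.
const deleteItemRequest = new SimpleEmitter<Item>();
const deleted: Item[] = [];
deleteItemRequest.subscribe(item => deleted.push(item)); // parent's handler

deleteItemRequest.emit({ id: 1, name: 'sword' }); // child's delete() call
console.log(deleted.length); // 1
```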
Component Lifecycle Hooks
Angular manages the lifecycle of the different Components. Through different Hooks, it provides a way to perform actions when those different moments occur. We gain access to these moments by implementing one or more of the lifecycle Hook interfaces from the Angular core library. Each interface has a single Hook method whose name is the interface name prefixed with "ng".
Down below, we have an example of a Component using the "OnInit" interface:
export class MyComponent implements OnInit {
  ngOnInit() {
    // Some code
  }
}
Communication between parent and child Components
There are a few ways to make a parent and child Component interact. One way is to inject the child Component into the parent as a ViewChild. This could be achieved like so:
import { Component, ViewChild } from '@angular/core';
import { ChildComponent } from './child-component.component';

export class ParentComponent {
  @ViewChild(ChildComponent)
  private childComponent: ChildComponent;

  method1() {
    this.childComponent.childMethod1();
  }

  method2() {
    this.childComponent.childMethod2();
  }
}
Another way to make a parent and child Component interact is to make them share a Service.
Directives
In Angular, there are three kinds of Directives:
Components - Directives with a template
Structural Directives - change the DOM layout by adding and removing DOM elements
Attribute Directives - change the appearance or behavior of an element, Component, or another Directive
We have already seen Components. They are the most common Directives.
Structural Directives change the structure of the View. They are things like "NgFor" or "NgIf". Here is an example of different Structural Directives:
<div *ngIf="character" class="name">{{character.name}}</div>

<ul>
  <li *ngFor="let character of characters">{{character.name}}</li>
</ul>

<div [ngSwitch]="character?.size">
  <app-big-character *ngSwitchCase="'big'" [character]="character"></app-big-character>
  <app-medium-character *ngSwitchCase="'medium'" [character]="character"></app-medium-character>
  <app-small-character *ngSwitchCase="'small'" [character]="character"></app-small-character>
  <app-character *ngSwitchDefault [character]="character"></app-character>
</div>
Attribute Directives are used as attributes of elements. They are things like "NgClass" or "NgStyle". Here is an example of different Attribute Directives:
<div [ngStyle]="currentStyles">
  Some content.
</div>

<div [class.error]="hasError">Some error</div>
Let's make a little side note about the "NgModel" Directive, which is part of the "FormsModule". This Directive helps us when we want to display a data property and update that property when the user makes changes through a form. Two-way data binding makes this easier. It maps the various fields of our form to our Data Model and ensures that the data in the View and the data in our Data Model stay in sync.
We can use this Directive like so:
export class MyComponent { name: string; }
my-component.component.ts file
<input type="text" [(ngModel)]="name" />
my-component.component.html file
We are also able to build Attribute Directives. We just have to create a class annotated with the "@Directive" decorator.
Pipes
Pipes are a way to operate some transformations over data before displaying them. Angular comes with several built-in Pipes. For example, we can have something like so:
<p>The character's birthday is {{ birthday | date:"MM/dd/yy" }}</p>
We are also able to create our own Pipes by using the "@Pipe" decorator and implementing the "PipeTransform" interface. This could be done like so:
import { Pipe, PipeTransform } from '@angular/core';

@Pipe({ name: 'exponentialStrength' })
export class ExponentialStrengthPipe implements PipeTransform {
  transform(value: number, exponent: string): number {
    let exp = parseFloat(exponent);
    return Math.pow(value, isNaN(exp) ? 1 : exp);
  }
}
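Since a Pipe's transform is plain logic, we can sketch and exercise it as a pure function outside Angular. The function name simply mirrors the pipe above, for illustration:

```typescript
// The pipe's transform logic extracted as a pure function: raise "value" to
// the power given by "exponent", falling back to exponent 1 when it is not
// a parsable number.
function exponentialStrength(value: number, exponent: string): number {
  const exp = parseFloat(exponent);
  return Math.pow(value, isNaN(exp) ? 1 : exp);
}

console.log(exponentialStrength(2, '10')); // 1024
console.log(exponentialStrength(2, 'oops')); // 2 (falls back to exponent 1)
```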
Observables
Observables provide support for passing messages between Publishers and Subscribers in our application. An Observable can deliver multiple values of any type.
A Publisher must create an Observable instance. This object defines a subscriber function that is executed when a consumer calls the "subscribe()" method; it states how to get or generate the values to be published. To execute our Observable, we have to call its "subscribe()" method and pass it an Observer, an object that implements the "Observer" interface and is responsible for handling the various notifications from the Observable.
To use Observables, we need to import the RxJS library. RxJS is a library for reactive programming, a declarative paradigm where we program with asynchronous data streams. Data streams can be anything, and we are able to listen to them and react accordingly. A stream is a sequence of events ordered in time, and it can emit three different things: a value of some type, an error, or a "completed" signal. We capture these emitted events asynchronously by defining functions: one that executes when a value is emitted, one that executes when an error is emitted, and one that executes when "completed" is emitted. The act of listening to the stream is called "subscribing". The functions we define are the "Observers", while the stream is the "subject" or the "Observable". This is the behavioral Design Pattern known as the Observer Pattern. We also deal with "Operators": pure functions (functions that always return the same result for the same arguments) that let us work on the emitted values.
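The stream vocabulary above can be sketched without RxJS. This is only an illustration of the Observer Pattern, not how RxJS Observables actually work (they are lazy and far more capable): a subject pushes values, an error, or completion to its observers, and each observer reacts with the matching callback.

```typescript
// Observers define three reactions: next (a value), error, and complete.
interface Observer<T> {
  next(value: T): void;
  error(err: Error): void;
  complete(): void;
}

// A minimal eager subject: subscribing registers an observer, and the
// subject forwards each emitted event to all registered observers.
class Subject<T> {
  private observers: Observer<T>[] = [];

  subscribe(observer: Observer<T>): void {
    this.observers.push(observer);
  }

  next(value: T): void {
    this.observers.forEach(o => o.next(value));
  }

  complete(): void {
    this.observers.forEach(o => o.complete());
  }
}

const clicks = new Subject<number>();
const seen: number[] = [];
clicks.subscribe({
  next: v => seen.push(v),
  error: e => console.error(e.message),
  complete: () => seen.push(-1), // mark completion with a sentinel
});

clicks.next(1);
clicks.next(2);
clicks.complete();
console.log(seen); // [1, 2, -1]
```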
This kind of programming is really helpful when we have to deal with various UI Events related to data Events. It helps us to achieve real-time apps.
Let's imagine that we have a Service that is responsible to fetch users:
import { Observable } from 'rxjs/Rx';
import { Injectable } from '@angular/core';
import { Http, Response } from '@angular/http';

@Injectable()
export class UsersService {
  constructor(public http: Http) {}

  public fetchUsers() {
    return this.http.get('/api/users').map((res: Response) => res.json());
  }
}
Our method "fetchUsers()" returns an Observable, our subject. So, we can subscribe to our subject like so:
import { Component } from '@angular/core';
import { Observable } from 'rxjs/Rx';
import { UsersService } from './users.service';
import { User } from './user';

@Component({
  selector: 'my-component',
  templateUrl: './my-component.component.html',
  providers: [ UsersService ]
})
export class MyComponent {
  public users: Observable<User[]>;

  constructor(public usersService: UsersService) {}

  public ngOnInit() {
    this.users = this.usersService.fetchUsers();
  }
}
In our template file, we have to do the following things:
<ul class="user-list" *ngIf="(users | async).length">
  <li class="user" *ngFor="let user of users | async">
    {{ user.name }}
  </li>
</ul>
We may also want to create an Observable from a Promise. We can do it like so:
import { fromPromise } from 'rxjs/observable/fromPromise';

const data = fromPromise(fetch('/api/endpoint'));
This creates an Observable. To subscribe, we have to do the following thing:
data.subscribe({
  next(response) { console.log(response); },
  error(err) { console.error('Error: ' + err); },
  complete() { console.log('Completed'); }
});
Here, we achieve the process of subscription and as we can see, we define the three functions that we talked about a little earlier.
Forms
We can use Angular event bindings to respond to Events that are triggered by user input. For example, we can imagine the following situation:
export class MyComponent {
  values = '';

  onKey(event: any) {
    this.values += event.target.value;
  }
}
my-component.component.ts file
<input (keyup)="onKey($event)"> <p>{{values}}</p>
my-component.component.html file
Angular also has a whole Forms library that helps us with many things. We can, for example, use it to add validation rules to our forms.
<input id="name" name="name" class="form-control"
       required minlength="4"
       [(ngModel)]="user.name" #name="ngModel">

<div *ngIf="name.invalid && (name.dirty || name.touched)" class="alert alert-danger">
  <div *ngIf="name.errors.required">
    Name is required.
  </div>
  <div *ngIf="name.errors.minlength">
    Name must be at least 4 characters long.
  </div>
</div>
Here, we start by defining an input with a few rules. As we can see, we export the "ngModel" Directive to achieve two-way data binding. We also export the form control's state to a local template variable "#name". Then, we check whether the control has been touched, and we display the different errors if there are any.
With Angular, we also have the ability to dynamically generate forms. To achieve this, we have to create objects derived from the base class "QuestionBase" that represent the various controls of our forms. We can then process them through a Service that builds the form and returns it as a "FormGroup" object.
Routing & Navigation
In Angular, the Router allows navigation from one View to the next. The Router interprets a browser URL to navigate to a client generated View and, if needed, pass optional parameters. The Router can be bound to links or it can be used in response to some actions.
To use the Router correctly, we need to add a "base" element to our "index.html" file. We also need to import the Router Module. In our "app.module.ts" file, we can do the following thing:
import { RouterModule, Routes } from '@angular/router';

const appRoutes: Routes = [
  { path: 'characters', component: CharactersComponent },
  { path: 'character/:id', component: CharacterDetailComponent },
  { path: '', redirectTo: '/characters', pathMatch: 'full' },
  { path: '**', component: PageNotFoundComponent }
];

@NgModule({
  imports: [
    RouterModule.forRoot(appRoutes)
  ]
})
export class AppModule { }
As we can see, we define our navigation Routes in the array "appRoutes" and we pass this array to the "RouterModule". We can now use the "RouterOutlet" Directive, that marks where the Router displays a View, to create some kind of navigation menu:
<nav>
  <a routerLink="/characters" routerLinkActive="active">Characters</a>
</nav>

<router-outlet></router-outlet>
After the end of each successful navigation lifecycle, the Router builds a tree of "ActivatedRoute" objects that make up the current state of the Router. We are able to access the current "RouterState" from anywhere in the application using the Router Service and the "routerState" property.
Conclusion
Through this article, we got a brief overview of the Angular technology. It was more a theoretical post than a practical example. Of course, we didn't cover entirely each subject and there are plenty of other subjects that we could have explored like Unit Testing or E2E Testing. Now, however, we have enough knowledge of Angular to start a project and to dig deeper into this framework.
One last word
If you like this article, you can consider supporting and helping me on Patreon! It would be awesome! Otherwise, you can find my other posts on Medium and Tumblr. You will also know more about myself on my personal website. Until next time, happy headache!
Memoization
Through this article, we are going to look at "memoization".
Introduction
The word "memoization" seems to be misspelled, but in fact it is not. It is an optimization technique to speed up a program. Let's see how it works.
Definition
The word "memoization" seems to be derived from the Latin word "memorandum" ("to be remembered"). This technique consists, for a given function, in storing previously computed results to improve performance. The idea is to remember the result computed for certain inputs and to reuse that memorized result, instead of recomputing it, whenever those same inputs occur again.
We can see memoization as some kind of caching technique, but there is a tiny difference: with memoization, which is a specific form of caching, we store the returned value of a function based on its parameters. With caching, we speak of a more general domain that includes any output-buffering strategy.
Example
Let's make an example using what we know. Consider the following function:
const factorial = n => {
  return n === 0 || n === 1 ? 1 : n * factorial(n - 1)
}
We can see this is a factorial function. However, calling this function multiple times with the same value of "n" can hurt performance. That's where memoization comes in. Let's modify our function like so:
// The cache lives outside the function so it survives between calls
const cache = {}

const factorial = n => {
  if (n in cache) {
    return cache[n]
  }
  if (n === 0 || n === 1) {
    return 1
  }
  return cache[n] = n * factorial(n - 1)
}
As we can see, now, in our function, we store the different computed values in a really simple manner.
Now, let's go a little further. What if we want to use memoization with multiple functions?
const memoize = f => {
  const cache = {}
  return n => {
    if (n in cache) {
      return cache[n]
    }
    return cache[n] = f(n)
  }
}

const factorial = memoize(n => n === 0 || n === 1 ? 1 : n * factorial(n - 1))
const fibonacci = memoize(n => n === 0 || n === 1 ? n : fibonacci(n - 1) + fibonacci(n - 2))
Here, we now have a "memoize" function that wraps any function to create a "memoized" version of it. Note, however, that this version only handles a single argument.
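To lift that limitation, one common approach is to serialize all the arguments into a single cache key. The following is a small sketch of ours (the helper name "memoizeAll" is hypothetical, not from any library), and it assumes the arguments are JSON-serializable:

```javascript
// A multi-argument memoize: the cache key is built by serializing
// all the arguments with JSON.stringify.
const memoizeAll = f => {
  const cache = {}
  return (...args) => {
    const key = JSON.stringify(args)
    if (key in cache) {
      return cache[key]
    }
    return cache[key] = f(...args)
  }
}

const add = memoizeAll((a, b) => a + b)
console.log(add(2, 3)) // 5 (computed)
console.log(add(2, 3)) // 5 (served from the cache)
```

The trade-off of this approach is the cost of serialization itself, which is why single-argument memoizers often use the argument directly as the key.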
Conclusion
Through this article, we saw the meaning of the word "memoization" and how, here with JavaScript, we can use such a concept.
One last word
If you like this article, you can consider supporting and helping me on Patreon! It would be awesome! Otherwise, you can find my other posts on Medium and Tumblr. You will also know more about myself on my personal website. Until next time, happy headache!
JavaScript Generators
In this article, we are going to take a look at Generators in JavaScript.
Introduction
JavaScript Generators were introduced in ES6. A Generator is a type of function that can be exited and re-entered multiple times. Let's see what it really is.
Definition
When a standard JavaScript function is called, it runs until it reaches a return value or the end of the function. Generators allow us to run just one part of a function, then exit it and come back later to restart it exactly where we left off. They give us control over the flow of a function.
To pause such a function, we use the "yield" keyword inside it; when it is reached, the function is exited. To re-enter the same function, we use the "next()" method. Each time we exit the function, we can gather some information: the value that was possibly yielded and the status of the Generator.
Example and explanation
Let's make a simple example of a Generator function.
function* mario() {
  console.log("It's-a Me, Mario!");
  console.log("Jumping");
  console.log("Eating a mushroom");
  yield "pause";
  console.log("Running");
  yield "pause";
  console.log("Falling into lava");
  yield "game over";
}

const marioValues = mario();
let gen;

gen = marioValues.next();          // Output: It's-a Me, Mario!; Jumping; Eating a mushroom
console.log(gen.value, gen.done);  // Output: pause false
gen = marioValues.next();          // Output: Running
console.log(gen.value, gen.done);  // Output: pause false
gen = marioValues.next();          // Output: Falling into lava
console.log(gen.value, gen.done);  // Output: game over false
gen = marioValues.next();          // Output: -
console.log(gen.value, gen.done);  // Output: undefined true
As we can see, we declare a Generator by using the "*" symbol. Here, we have three "yield" statements and we can see that each time one of them is reached, the function is exited.
We can notice that the "done" value turns to "true" only after we call the "next()" method a fourth time. Only then is the function complete, and "value" becomes "undefined". We can correct this by replacing the last "yield" with a "return"; in that case it works. However, in a "for..of" loop, the value produced by the "return" keyword would be thrown away.
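As a small illustrative sketch of that behaviour (the "game" Generator below is ours, not part of the example above):

```javascript
function* game() {
  yield 1;
  yield 2;
  return "game over"; // the return value ends the Generator
}

const iterator = game();
console.log(iterator.next()); // { value: 1, done: false }
console.log(iterator.next()); // { value: 2, done: false }
console.log(iterator.next()); // { value: "game over", done: true }

// In a for..of loop, the value carried by "return" is thrown away:
const seen = [];
for (const v of game()) {
  seen.push(v);
}
console.log(seen); // [1, 2]
```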
When we call the Generator, the function is not executed. In fact, an iterator object is created that will let us interact with the Generator. As soon as the iterator is created, the Generator goes into a suspended state.
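We can illustrate this lazy behaviour with a short sketch of ours:

```javascript
// Calling a Generator only creates an iterator: the body does not
// run until the first call to "next()".
const log = [];

function* lazy() {
  log.push("started");
  yield 42;
}

const it = lazy();
console.log(log.length); // 0 - the body has not run yet
const step = it.next();
console.log(step.value); // 42
console.log(log); // ["started"]
```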
Conclusion
Through this small article, we saw what Generators are in JavaScript. We saw how to use them through a really basic example.
One last word
If you like this article, you can consider supporting and helping me on Patreon! It would be awesome! Otherwise, you can find my other posts on Medium and Tumblr. You will also know more about myself on my personal website. Until next time, happy headache!
An overview of Sass Maps
In this article, we are going to have an overview of Sass Maps.
Introduction
Sass brought a lot of interesting features that help us to create style sheets more easily and more elegantly. One of those features are the Maps. Let's see what they are.
Definition
Basically, Maps are like associative arrays. A Map has a name and contains one or more unique keys, each associated with a value. A Sass Map looks like this:
$map: (
  key: value,
  another-key: another-value
);
Examples
So, let's declare a Map to make a few examples:
$colors: (
  red: #cc3120,
  green: #3fcc41,
  blue: #4286f4
);
Accessing a value
We can access a value like so:
.element {
  background-color: map-get($colors, red);
}
Here, we use the function "map-get()" that takes, as arguments, the name of the Map and the key we want to target.
Checking a value
We can check if a value exists like so:
.element {
  @if map-has-key($colors, orange) {
    background-color: map-get($colors, orange);
  } @else {
    background-color: $default;
  }
}
So, knowing that, it is easy to create such a function:
@function color($key) {
  @if map-has-key($colors, $key) {
    @return map-get($colors, $key);
  }
  @warn "Unknown `#{$key}` in $colors.";
  @return null;
}

.element {
  background-color: color(red);
}
Nested Maps
Let's go a little further and imagine something like this:
$colors: (
  red: (
    color: #f4cfcb,
    background: #cc3120
  ),
  green: (
    color: #b8d6b8,
    background: #3fcc41
  ),
  blue: (
    color: #bacff2,
    background: #4286f4
  )
);
Here, we have a nested Map. We can use it like so:
@function color($color, $attribute) {
  @return map-get(map-get($colors, $color), $attribute);
}

.element {
  color: color(green, color);
}
Creating loops
With Maps, we can easily create loops like so:
// Map
$sections: (
  'red-section': (
    'background': #cc3120,
    'color': #ffffff
  ),
  'green-section': (
    'background': #3fcc41,
    'color': #ffffff
  )
);

// Function
@function color($map, $section, $attribute) {
  @if map-has-key($map, $section) {
    @return map-get(map-get($map, $section), $attribute);
  }
  @warn "The key `#{$section}` is not available in the map.";
  @return null;
}

// Loop
@each $key, $val in $sections {
  @if map-has-key($sections, $key) {
    .#{$key} {
      background-color: color($sections, $key, background);
      color: color($sections, $key, color);
    }
  }
}
Conclusion
Through this brief article, we saw what Sass Maps are. We had a few examples of how to use them and where they could be helpful.
One last word
If you like this article, you can consider supporting and helping me on Patreon! It would be awesome! Otherwise, you can find my other posts on Medium and Tumblr. You will also know more about myself on my personal website. Until next time, happy headache!
Connecting to SQL Server
Through this article, we are going to see how we can install SQL Server and connect an application to it.
Introduction
Microsoft SQL Server is a relational database management system developed by Microsoft. Here, we are going to have a look at how we can install it and how an ASP.NET MVC application using the Entity Framework can connect to it.
Installing SQL Server
Head to https://www.microsoft.com/en-us/sql-server/sql-server-downloads. Here, we are going to choose the Developer Edition. When the download is complete, we just have to install it: double-click on the installer and then choose the basic installation.
We may encounter a problem during the installation giving us the error 1638. There are two solutions: we can uninstall Visual Studio 2017 completely, install SQL Server then reinstall Visual Studio 2017 or uninstall Microsoft Visual C++ 2017 Redistributable (x86) and (x64), install SQL Server then reinstall Microsoft Visual C++ 2017 Redistributable (x86) and (x64).
At the end of the installation, notice that several details are given, like the Connection String. We also have the opportunity to directly install SQL Server Management Studio (SSMS). Here, we are going to do this also.
Setting up the project
For our example, we are going to build a really simple application. Let's head to Visual Studio and choose "New Project > Visual C# > Web > ASP.NET Web Application (.NET Framework)". After we have entered a name for our application, we can go to the next step and choose the "MVC" option. We can check the box for unit tests if we want to generate a second assembly for unit testing.
Let's create a Model, named "Movie.cs" and fill it like so:
public class Movie
{
    public int Id { get; set; }
    public string Name { get; set; }
}
Models/Movie.cs file
We can now create a folder named "DAL" and place a file named "MovieContext.cs" into it. Let's insert the following code inside that file:
using System;
using System.Collections.Generic;
using System.Data.Entity;
using System.Data.Entity.ModelConfiguration.Conventions;
using System.Linq;
using System.Web;
using SQLServerApp.Models;

namespace SQLServerApp.DAL
{
    public class MovieContext : DbContext
    {
        public MovieContext() : base("MovieContext")
        {
        }

        public DbSet<Movie> Movies { get; set; }
    }
}
DAL/MovieContext.cs
Entity Framework and Migrations
We can now install the Entity Framework like so:
PM> Install-Package EntityFramework -Version 6.2.0
Installing the Entity Framework
We also have to set up our Connection String in the "Web.config" file:
<connectionStrings>
  <add name="MovieContext"
       connectionString="Data Source=OUR_SERVER\INSTANCE;Initial Catalog=MoviesDB;Integrated Security=True;MultipleActiveResultSets=True"
       providerName="System.Data.SqlClient"/>
</connectionStrings>
Web.config file edited
Here, to make things work, we have to target the right server and the right instance.
We can now enable Migrations:
PM> enable-migrations
Enabling migrations
In the newly created "Migrations" folder, we can find a file named "Configuration.cs". Just for our example, let's place the following code into the "Seed()" method:
...
using System.Collections.Generic;
...
using SQLServerApp.Models;

protected override void Seed(SQLServerApp.DAL.MovieContext context)
{
    var movies = new List<Movie>
    {
        new Movie{Name="2001: A Space Odyssey"},
        new Movie{Name="Gattaca"},
        new Movie{Name="Interstellar"},
        new Movie{Name="Cloud Atlas"}
    };

    movies.ForEach(e => context.Movies.AddOrUpdate(p => p.Name, e));
    context.SaveChanges();
}
Migrations/Configuration.cs
Let's now create a Migration and update the database:
PM> add-migration Init
PM> update-database
Creating a Migration and updating the database
Check
We can use the SQL Server Object Explorer to check if everything is right. We first have to add a server. Let's click on "Add SQL Server" and choose the targeted one. We can now see our database and our data. We can achieve the same thing through SQL Server Management Studio by right-clicking our database's tables.
Conclusion
Through this brief article, we saw how to install SQL Server and how we can link an ASP.NET MVC application using the Entity Framework to this relational database management system.
One last word
If you like this article, you can consider supporting and helping me on Patreon! It would be awesome! Otherwise, you can find my other posts on Medium and Tumblr. You will also know more about myself on my personal website. Until next time, happy headache!
Async and Await in C#
In this article, we are going to look at the keywords "async" and "await" in C#. Let's get into it!
Introduction
Using asynchronous programming allows us to enhance the responsiveness of our application. It helps us in case we have a potentially blocking activity where the entire application has to wait before continuing. Using an asynchronous process lets the application continue its work that doesn't depend on the blocking activity until this task is finished. Let's see how we can put that in place.
Tasks
To achieve asynchronous programming with C#, we need to be aware of Tasks.
A Task represents an asynchronous operation. It is an operation we want to perform and that is executed in the background. A Task is something like a promise, or a future, that says something like "I will return this a little bit later". Tasks can be chained to be executed one after the other.
A Task mustn't be confused with a Thread. Thread is a lower-level concept. A Thread is a way of fulfilling a promise, but not every Task needs a brand-new Thread. A Task can return a result, while there is no direct mechanism to return a result from a Thread.
If we are familiar with JavaScript Promises, Tasks are quite the same.
Example and explanation
Let's imagine the following code:
private async void Button_Click(object sender, RoutedEventArgs e)
{
    textBox.Text = "Wait...";

    // ... DOING SOME OPERATIONS

    string result = await SimpleMethodAsync();
    textBox.Text = result;
}

public async Task<string> SimpleMethodAsync()
{
    // ... DOING SOMETHING
    await Task.Delay(2000);
    return "Finished";
}
As we can see, here we use three words that we had already mentioned: "async", "await", "Task". So, what does happen here? We can see that we use the keyword "async" in front of the "Button_Click()" method. It says that we define an asynchronous method. This also means that we are able to wait for something. So, when the method "Button_Click()" is called, responding to a click event, it is executed synchronously until it reaches the keyword "await". By that time, the "SimpleMethodAsync()" will be called and if other independent work can be done, it is operated. The "await" keyword tells the compiler that we ultimately need the result of the "SimpleMethodAsync()" method, but we don't need to block on that call.
The "SimpleMethodAsync()" method also has the "async" keyword in front of it. This method returns a string, and we specify this with the return type "Task<string>". After our "return" statement, the "Button_Click()" method can get the result.
An important note is that even though returning "void" from an "async" method is allowed, it should not be used in most cases. The other two return types, "Task" and "Task<T>", represent "void" and "T" respectively, once the awaitable method completes and returns its result. The use of "void" as a return type should be limited to event handlers.
In other words
The "async" keyword enables the "await" keyword in a method and changes how the method's result is handled. An "async" method is executed like any other method, synchronously, until it reaches the "await" keyword. At this point, things get asynchronous. "await" takes a single argument, an "awaitable", which is an asynchronous operation. "await" checks if the "awaitable" has already completed; if it has, the method simply continues synchronously. Otherwise, it tells the "awaitable" to run the remainder of the method when it completes, and then returns from the "async" method. When the "awaitable" completes, the remainder of the "async" method is executed.
Conclusion
Through this article, we saw the meaning of the "async" and "await" keywords in C#. We had a basic overview of how asynchronous operations work and how we can use them.
One last word
If you like this article, you can consider supporting and helping me on Patreon! It would be awesome! Otherwise, you can find my other posts on Medium and Tumblr. You will also know more about myself on my personal website. Until next time, happy headache!