Should a method name describe what it does or what it intends?

Bob Martin raises a good example in InformIT (Robert C. Martin’s Clean Code Tip of the Week #1: An Accidental Doppelgänger in Ruby): the duplication of two functions that do the same thing but mean different things by it.

I recently stumbled into a slightly different take on the question: should a function say what it does, or what it intends?

When a function implements business process Alpha, which today consists of steps A and B (but tomorrow may change), should you call the function DoBusinessProcessAlpha or DoStepsAandB?

One answer would be: if the function is in a public package which exposes business functionality, then the name should probably show that it does BusinessProcessAlpha. But if it is a private, unexposed function, then the reader is probably looking for the detail: that it does StepsAandB.
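
For example, a minimal C# sketch of that answer (the class and method names are invented for illustration):

public class AlphaProcessor
{
    // Public callers care about the intent: business process Alpha.
    public void DoBusinessProcessAlpha()
    {
        DoStepA();
        DoStepB();
    }

    // Private readers care about the detail: which steps run today.
    void DoStepA() { /* ... */ }
    void DoStepB() { /* ... */ }
}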

The question is more awkward if Steps A and B are themselves business process functions. That is, if you asked your customer, they would understand what steps A and B mean.

I suppose in that case you could always call it DoAlphaAsStepsAandB().

Default timeouts in .Net code. What are they if you don’t specify?

What are the default timeouts in .Net code if you don't specify one? I realised I didn't know when I got timeouts for an HttpClient calling a WCF service calling a SQL query. The choices are all reasonable, but they're all different. So here's a list.

System.Net.Http.HttpClient

.Timeout Default 100 seconds
The entire time to wait for the request to complete.
https://docs.microsoft.com/en-us/dotnet/api/system.net.http.httpclient.timeout?view=netframework-4.5 (if your url requires a DNS call and you set timeout < 15 seconds, your timeout may be ineffective; it may still take up to 15 seconds to timeout.)
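
If the 100-second default doesn't suit, you can override it before the first request is sent. A minimal sketch; the 30-second value is only an example:

using System;
using System.Net.Http;
using System.Threading.Tasks;

class HttpClientTimeoutExample
{
    static readonly HttpClient Client = new HttpClient
    {
        // Overrides the 100-second default. Must be set before the first request is sent.
        Timeout = TimeSpan.FromSeconds(30)
    };

    static async Task Main()
    {
        var response = await Client.GetAsync("https://example.org/");
        Console.WriteLine(response.StatusCode);
    }
}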

System.Data.SqlClient SqlConnection & SqlCommand

SqlConnection.ConnectionTimeout Default 15 seconds

The timeout to wait for a connection to open.
https://docs.microsoft.com/en-us/dotnet/api/system.data.sqlclient.sqlconnection.connectiontimeout?view=netframework-1.1

SqlCommand.CommandTimeout Default 30 seconds

The wait time before terminating the attempt to execute a command and generating an error.

https://docs.microsoft.com/en-us/dotnet/api/system.data.sqlclient.sqlcommand.commandtimeout?view=netframework-1.1

Remarks & Notes

  • A value of 0 indicates no limit (an attempt to execute a command will wait indefinitely).
  • The CommandTimeout property will be ignored during asynchronous method calls such as BeginExecuteReader.
  • CommandTimeout has no effect when the command is executed against a context connection (a SqlConnection opened with "context connection=true" in the connection string).
  • This is the cumulative time-out (for all network packets that are read during the invocation of a method) for all network reads during command execution or processing of the results. A time-out can still occur after the first row is returned, and does not include user processing time, only network read time. For example, with a 30 second time out, if Read requires two network packets, then it has 30 seconds to read both network packets. If you call Read again, it will have another 30 seconds to read any data that it requires.
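
For reference, a sketch of overriding both defaults; the connection string, table name and values here are illustrative only:

using System;
using System.Data.SqlClient;

class SqlTimeoutExample
{
    static void Main()
    {
        // "Connect Timeout" in the connection string overrides the 15-second ConnectionTimeout default.
        using (var connection = new SqlConnection(
            "Server=.;Database=MyDb;Integrated Security=true;Connect Timeout=30"))
        using (var command = new SqlCommand("SELECT COUNT(*) FROM SomeTable", connection))
        {
            // Overrides the 30-second CommandTimeout default; 0 would mean wait indefinitely.
            command.CommandTimeout = 60;
            connection.Open();
            Console.WriteLine(command.ExecuteScalar());
        }
    }
}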

WCF

IDefaultCommunicationTimeouts

This interface does not, of course, set any default values, but it does define the meaning of the four main timeouts applicable to WCF.

System.ServiceModel.Channels.Binding Default values

Remember, timeouts in WCF apply at the level of the Binding used by the client or service. (Think about it. It makes sense).

So the Binding class defaults affect all your WCF operations unless a specific binding subclass changes it. Subclasses include: BasicHttpBinding, WebHttpBinding, WSDualHttpBinding, all other HttpBindings, all MsmqBindings, NetNamedPipeBinding, NetPeerTcpBinding, NetTcpBinding, UdpBinding, and CustomBindings.
https://docs.microsoft.com/en-us/dotnet/api/system.servicemodel.channels.binding?view=netframework-3.0

The defaults are:
OpenTimeout 1 minute
CloseTimeout 1 minute
SendTimeout 1 minute
ReceiveTimeout 10 minutes

However, whilst some bindings (basicHttpBinding, netTcpBinding) specify the same defaults as the Binding base class (1 min, 1 min, 1 min, 10 minutes) …

WCF Service Timeouts for webHttpBinding, wsDualHttpBinding and other bindings

The documentation for these bindings contradicts (or should I say, overrides) the documentation for the framework classes, and says that all four timeouts, including ReceiveTimeout, default to 1 minute. It could be a typo; I haven't tested. See all the various bindings at https://docs.microsoft.com/en-us/dotnet/framework/configure-apps/file-schema/wcf/bindings.

webHttpBinding vs basicHttpBinding Reminder:
webHttpBinding is for simple (so-called REST-style) HTTP requests, as opposed to SOAP. basicHttpBinding is for SOAP, i.e. conforming to the WS-I Basic Profile 1.1.

WCF Client Timeouts

A WCF client uses three of these Timeout settings. Since the default value is set by the Binding, what remains is to clarify the definitions.

  • SendTimeout: used to initialize the OperationTimeout, which governs the whole process of sending a message, including receiving a reply message for a request/reply service operation. This timeout also applies when sending reply messages from a callback contract method.
  • OpenTimeout: used when opening channels.
  • CloseTimeout: used when closing channels.
  • ReceiveTimeout: meaningless for a client and not used.

https://docs.microsoft.com/en-us/dotnet/framework/wcf/feature-details/configuring-timeout-values-on-a-binding
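
To override the defaults in code rather than in config, you set them on the binding. A sketch, assuming an invented contract IMyService and illustrative values:

using System;
using System.ServiceModel;

[ServiceContract]
public interface IMyService          // placeholder contract for the sketch
{
    [OperationContract] string Ping();
}

class WcfClientTimeoutExample
{
    static void Main()
    {
        var binding = new BasicHttpBinding
        {
            OpenTimeout    = TimeSpan.FromSeconds(30),
            CloseTimeout   = TimeSpan.FromSeconds(30),
            SendTimeout    = TimeSpan.FromMinutes(2),   // covers the whole request/reply exchange
            ReceiveTimeout = TimeSpan.FromMinutes(10)   // not used on the client side
        };
        var factory = new ChannelFactory<IMyService>(
            binding, new EndpointAddress("http://localhost/myservice"));
        var client = factory.CreateChannel();  // calls through this channel use the timeouts above
    }
}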

WCF Serverside Timeouts

A WCF service uses all four Timeout settings. Three have the same definition as for a WCF client. The fourth, ReceiveTimeout, is used by the Service Framework Layer to initialize the session-idle timeout, which controls how long a session can be idle before timing out.

WCF using a Binding with reliableSession

Some System.ServiceModel.Bindings allow the use of ReliableSession behaviour, which adds another timeout:

InactivityTimeout defaults to 10 minutes.

https://docs.microsoft.com/en-us/dotnet/api/system.servicemodel.channels.reliablesessionbindingelement.inactivitytimeout?view=netframework-3.0.

Remarks

  • Activity on a channel is defined as receiving an application or infrastructure message. The inactivity timeout parameter controls the maximum amount of time to keep an inactive session alive. If more than InactivityTimeout time interval passes with no activity, the session is aborted by the infrastructure and the channel faults. The reliable session is torn down unilaterally.
  • If the sending application has no messages to send then the reliable session is normally not faulted because of inactivity; instead a keep-alive mechanism keeps the session active indefinitely. Note that the dispatcher can independently abort the reliable session if no application messages are sent or received. Thus, the inactivity timeout typically expires if network conditions are such that no messages are received or if there is a failure on the sender.

but:

WCF Server using HttpBinding with reliableSession implemented as Connection: Keep-Alive HTTP header

https://social.msdn.microsoft.com/Forums/vstudio/en-US/d8a883dc-c47d-4912-b23b-2dfd0c2557cb/wcf-server-side-timeout?forum=wcf

BasicHttpBinding does not use any kind of session, so receiveTimeout should be irrelevant.
BasicHttpBinding can use HTTP persistent connections. Persistence is provided by the Connection: Keep-Alive HTTP header, which allows a single TCP connection to be shared by many HTTP requests/responses. There appears to be no way to change the timeout associated with this header, and IIS appears to always time out after 100 seconds of inactivity. IIS's keep-alive default timeout value is 120s, but changing this seems to have no effect on the WCF service.

The interesting thing is that closing the proxy/channel on the client side does not close the TCP connection. The connection stays open, ready to be used by another proxy to the same service. The connection closes when the 100s inactivity timeout expires or when the application is terminated. By the way, there is an RFC which says that at most two such TCP connections should exist between a client and a single server (this is the default behaviour in Windows, but it can be changed).

You can turn off HTTP persistent connections if you implement a customBinding and set keepAliveEnabled="false" in the httpTransport element. This will force the client to create a new TCP connection for each HTTP request.
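
In code rather than config, I believe the equivalent is a CustomBinding whose HTTP transport element has keep-alive switched off; the choice of text message encoding here is just for the sketch:

using System.ServiceModel.Channels;

static class NoKeepAliveBinding
{
    // Code equivalent of <httpTransport keepAliveEnabled="false"/> in a customBinding:
    // forces a new TCP connection per HTTP request.
    public static CustomBinding Create() =>
        new CustomBinding(
            new TextMessageEncodingBindingElement(),
            new HttpTransportBindingElement { KeepAliveEnabled = false });
}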

IIS Timeouts

https://docs.microsoft.com/en-us/iis/configuration/system.applicationhost/weblimits

connectionTimeout: Default 2 minutes.

Specifies the time (in seconds) that IIS waits before it disconnects a connection that is considered inactive. Connections can be considered inactive for the following reasons:

  • The HTTP.sys Timer_ConnectionIdle timer expired. The connection expired and remains idle.
  • The HTTP.sys Timer_EntityBody timer expired. The connection expired before the request entity body arrived. When it is clear that a request has an entity body, the HTTP API turns on the Timer_EntityBody timer. Initially, the limit of this timer is set to the connectionTimeout value. Each time another data indication is received on this request, the HTTP API resets the timer to give the connection more minutes as specified in the connectionTimeout attribute.
  • The HTTP.sys Timer_AppPool timer expired. The connection expired because a request waited too long in an application pool queue for a server application to dequeue and process it. This time-out duration is connectionTimeout.

headerWaitTimeout : Default 0 seconds
ToDo: Does this mean none, or does it mean no timeout until the connectionTimeout is hit?

IIS Asp.Net HttpRuntime

executionTimeout: Default 110 seconds in .Net Framework 2.0 and 4.x; in .Net Framework 1.0 and 1.1, the default is 90 seconds.

IIS WebSockets

pingInterval: default is 0 seconds.

IIS Classic Asp

queueTimeout : default value is 0.
The maximum period of time (hh:mm:ss) that an ASP request can wait in the request queue.

scriptTimeout : default value is 1 minute 30 seconds
The maximum period of time that ASP pages allow a script to run before terminating the script and writing an event to the Windows Event Log.

IIS FastCGI

https://docs.microsoft.com/en-us/iis/configuration/system.webserver/fastcgi/application/index

activityTimeout: The default value in IIS 7.0 is 30 seconds; the default for IIS 7.5 is 70 seconds.
The maximum time, in seconds, that a FastCGI process can take to process.

idleTimeout: default 300 seconds.
The maximum amount of time, in seconds, that a FastCGI process can be idle before the process is shut down

requestTimeout: default 90 seconds
The maximum time, in seconds, that a FastCGI process request can take.

Http Server 408 Request Timeout

https://tools.ietf.org/html/rfc7231#section-6.5.7

The 408 (Request Timeout) status code indicates that the server did not receive a complete request message within the time that it was prepared to wait. A server SHOULD send the "close" connection option (Section 6.1 of [RFC7230]) in the response, since 408 implies that the server has decided to close the connection rather than continue waiting. If the client has an outstanding request in transit, the client MAY repeat that request on a new connection.

Bash and PowerShell in a single script file

I'm not saying it's all dotnet’s fault, but it was when deploying dotnetcore services to a linux VM that I thought, “what I really, really want is both bash and powershell setup scripts in a single file”. Surely a working incantation can be crafted from such arcane systems of quoting and escaping as the two languages offer?

½ an evening later:

# This file has a bash section followed by a powershell section,
# and a shared section at the end.
echo @'
' > /dev/null
#vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
# Bash Start --------------------------------------------------

scriptdir="`dirname "${BASH_SOURCE[0]}"`";
echo BASH. Script is running from $scriptdir

# Bash End ----------------------------------------------------
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
echo > /dev/null <<"out-null" ###
'@ | out-null
#vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
# Powershell Start --------------------------------------------

$scriptdir=$PSScriptRoot
"powershell. Script is running from $scriptdir"

# Powershell End ----------------------------------------------
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
out-null

echo "Some lines work in both bash and powershell. Calculating scriptdir=$scriptdir, requires separate sections."

It relies on herestring quoting being different on each platform, as is the escape character ( \ vs ` ). Readability (ha!) is very much helped by

#comments begin with a hash 

being common to both, so I can do visible dividers between the sections.

My main goal was environment variable setup before launching dotnetcore services. Sadly the incompatible syntaxes for variables and environment:

#powershell syntax
$variable="value"
$env:variable2=$value
#bash syntax
variable=value
export variable2=value 

mean very little shared code inside the file, but it really cut down errors a lot just by having them in the same file. Almost-a-single-source-of-truth turned out to be much more reliable than not-at-all a single source of truth.

Bash-then-powershell was simpler than powershell-then-bash. My state of the art is powershell named and validated parameters, which allows tab-completion to work in powershell.

` # \
# PowerShell Param
# every line must end in #\ except last line must end in <#\
# And, you can't use backticks in this section        #\
param( [ValidateSet('A','B')]$tabCompletionWorksHere, #\
       [switch]$andHere                               #\
     )                                               <#\
#^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ `

Repo: github.com/chrisfcarroll/PowerShell-Bash-Dual-Script-Templates

Raw: Powershell-or-bash-with-parameters.

Alternatively, do everything in powershell?

Of course, sensible people would do everything in a single scripting language. But it has been well-worth having the tools for both approaches. Especially for short bootstrap scripts.

For a powershell core everywhere approach, my main adaptation is the shebang header on all .ps1 files:

#! /usr/bin/env pwsh

which tells unix machines what kind of script it is. Powershell itself ignores it as a comment. Finally, you must also chmod a+x *.ps1 to mark the scripts as executable.

Migrating Net Framework to Netcore

Until NetCore 2 came along, migrating an existing Net Framework project to dotnet core was likely a painful exercise in futility, as you time-consumingly discovered just how many bits of the .Net framework don't exist on netcore 1. Small things, like key parts of AdoNet. It was a bleeding edge experience.

But then there was dotnet core 2 with not-very-far-off 100% Api compatibility. And now all is sweetness and light.

Seriously. It is. Huge chunks of your .Net framework project will now 'just work' on .netcore, with little or no editing. In fact it's so easy, you might consider multi-targeting net framework and netcore just to show off.

Console apps and class libraries are straightforward. Considerations for UI and platform technologies:

  • For AspNet, there is a learning curve from Mvc versions 3/4/5 to AspNetCore for which I will refer you to the tutorials. There is then work to do which I do cover below.
  • For Windows Forms and WPF projects, I recommend you to the considerations in MS guide for porting Winforms.
  • Windows Workflow (WF) and WCF-serverside are not (currently) migratable, although WCF-clientside is. Moving web-facing WCF-serverside to dotnetcore with NancyFx or to AspNetCore would be a smallish rewrite, about the same as moving to WebApi2; but it seems that MS are working on WCF serverside. Remoting, however, is gone. So if your preference for WCF is that it's a better style than all this pseudo-resty AspNet nonsense, then consider Nancy.

Overview

  1. Start with a new, empty dotnet core 2 project
  2. Drag-n-drop all your existing code into it, excluding AssemblyInfo.cs
  3. Deal with .settings and .config files
  4. Re-add your NuGet dependencies
  5. Deal with other code differences
  6. Build and Go!

Well okay, that last step, Build-and-Go, is more likely to be Build-and-Fix-The-Next-Compiler-Error-And-Build-Again. But it is mostly straightforward.

To migrate AspNet to AspNetCore there are more steps, and you do have to start with the learning curve for a whole new framework. That said, it's like someone thought, “let's redo Mvc as WebApi2 + Razor + Views but with a cleaner startup style and with mandatory dependency injection”. Your controllers will hardly change. I do find AspNetCore simpler, cleaner, easier to work with. Roughly, your steps are:

  1. Work through the getting started tutorials & learning curve. (Estimate 5-10 hours per person?)
  2. Migrate your startup config to the new approach (2-10 hours depending on how much novel startup code you have)
  3. Migrate any custom authentication to the new approach (An hour or so if you read the gotcha below)
  4. Consider whether your Attribute-based filters will remain as attributes or be re-worked into something else.
  5. Re-tool unit tests which mocked the old Asp.Net Mvc dependencies

Larger sets of projects

If you are dealing with not just a single project but a whole load of them, you should first look through Microsoft's guide to porting. The main reason to not work through those steps for a single project is that since netcore2, the fastest way to analyse “what problems will I have in porting” for a smallish project is to just do it! You can most likely finish the job already, faster than you can use the analysis tools to predict what problems you will have. That wasn't always the case before netcore 2. A couple of thoughts from that guide that I do recommend though:

Start with a new empty dotnet core 2 project.

To migrate an executable you'd create a console app. For a class library, you can make it a netstandard2 project, which makes the project available for use in .net 4.6.1+ / 4.7 as well as in dotnet core.

The command line is very trendy in dotnet core, so you can do it all with dotnet new instead of using a GUI. dotnet new will show what templates are installed on your machine.

Drag-n-drop all your existing code in, excluding AssemblyInfo.cs

dotnet core projects assume, by default, that if there's a code file in the directory or a subdirectory then it's part of the project, so just dragging all your code into the new project directory will just work.

Don't include the AssemblyInfo.cs, because that gets auto-generated from the .csproj file. If you have anything of interest in your AssemblyInfo.cs, edit the .csproj file and put it in there. The AssemblyInfo properties section of Additions to the csproj properties for dotnet core shows you the element names to use if you want to re-add information. Something like:

<PropertyGroup>
  <TargetFrameworks>netstandard1.6;net40</TargetFrameworks>
  <AssemblyVersion>4.1.4.3</AssemblyVersion>
  <AssemblyFileVersion>4.1.4.3</AssemblyFileVersion>
  <PackageVersion>4.1.4.3</PackageVersion>
  <GenerateDocumentationFile>true</GenerateDocumentationFile>
  <Title>TestBase – Rich, fluent assertions and tools for testing with heavyweight dependencies: AspNetCore, AdoNet, HttpClient, AspNet.Mvc, Streams, Logging</Title>
  <PackageDescription><![CDATA[*TestBase* gives you a flying start with ....etc...]]></PackageDescription>
</PropertyGroup>

Note the new properties with names beginning with <Package...>, which will be picked up by dotnet pack when creating NuGet packages. NuGet is much easier with dotnet core; it's pretty much built in, rather than being an extra thing to learn and do.

Deal with .settings and .config files

There is a whole new approach to settings and configuration. You will have to learn it. It's good though. It lets you do things like this:

{
  "AComponentDefaults": {
    "SomeSetting": "Me",
    "ANumericSetting": 1.0,
    "Subsetting": {
      "Something": "Sub"
    }
  },

  "JustOneLine": "This"
}

and then read a whole section as strongly-typed settings with a one-liner:

Configuration.GetSection("AComponentDefaults")
.Bind(myComponent = new AComponent());

You can still use single-line settings of course:

var justOneLine = Configuration["JustOneLine"];

The new system deals easily with per-environment overrides, and has a whole new “get your config from all kinds of other sources than the settings file” capability.
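
A sketch of how the layering can look when building configuration by hand; the file names and the ASPNETCORE_ENVIRONMENT convention used here are the usual AspNetCore ones, and later sources override earlier ones:

using System;
using Microsoft.Extensions.Configuration;

static class ConfigSketch
{
    public static IConfigurationRoot Build()
    {
        var environment = Environment.GetEnvironmentVariable("ASPNETCORE_ENVIRONMENT") ?? "Production";
        return new ConfigurationBuilder()
            .AddJsonFile("appsettings.json", optional: false)
            .AddJsonFile($"appsettings.{environment}.json", optional: true) // per-environment override
            .AddEnvironmentVariables()  // a non-file source, e.g. for containers or VMs
            .Build();
    }
}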

Re-add your NuGet dependencies

This is straightforward. In Visual Studio (or in JetBrains Rider) use right-click -> Manage NuGet Packages. On the command line it's dotnet add package.

The big news here is that most of your NuGet dependencies already work on dotnetcore. All of the most-downloaded NuGet packages are either multi-targeted or have packages for each platform. (The dependency trees of most packages for dotnet core are quite different from the dependency trees for net framework, but on the whole it makes no difference at all.)

Deal with other code differences

I don't think there are too many. Under netcore2, your major external dependencies – AdoNet, HttpClient and FileSystem – are all either the same or quite similar. SqlClient, Npgsql, Dapper are pretty much unchanged and the rest of the Framework is very much the same.
Main code changes:

  • Scan the list of breaking changes, which are largely in low-level or platform specific areas.
  • If you use reflection you must often use type.GetTypeInfo().GetXXX() instead of type.GetXXX(). If you're good with regex, this just needs a search-and-replace to fix; there's a sketch after this list.
  • EntityFramework Core is different, but not extremely different.
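
A minimal sketch of the reflection change mentioned above. On netcore2 and .Net Framework 4.5+ both forms compile, so this is only illustrative:

using System;
using System.Linq;
using System.Reflection;

static class ReflectionSketch
{
    static void Main()
    {
        // Net Framework style:  typeof(Uri).GetProperties()
        // Portable style:
        var properties = typeof(Uri).GetTypeInfo().GetProperties();
        Console.WriteLine(string.Join(", ", properties.Select(p => p.Name)));
    }
}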

Build and Go!

And … deploy to Macos and Linux. Hurray.

Migrating AspNet to AspNetCore

Work through the getting started tutorials

Really. Don't try to skip the aspnetcore getting started learning curve. Be aware that the tutorials push the new Razor Pages approach, which you will want to ignore. Instead be sure you're clear on how the new approach handles startup, dependency injection, attributes, filters, and authentication. Your controllers and routing will largely work with minimal change.

Migrate your startup config to the new approach

So, having done your learning curve, you understand that all your Global.asax.cs and App_Start code will move into, or be called from, your Startup class. You will cleanly separate config setup, having learned about the new configuration approach, and you will use a dependency injection container to provide any global config to your controllers.
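
As a rough sketch of where that code ends up, assuming AspNetCore 2.x; the settings class and section name are invented for the example:

using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;

// Invented settings class for the sketch
public class MySettings { public string SomeSetting { get; set; } }

public class Startup
{
    public Startup(IConfiguration configuration) { Configuration = configuration; }
    public IConfiguration Configuration { get; }

    // Code that used to live in Global.asax.cs / App_Start moves here
    public void ConfigureServices(IServiceCollection services)
    {
        // Strongly-typed config, handed to controllers via dependency injection
        services.Configure<MySettings>(Configuration.GetSection("MySettings"));
        services.AddMvc();
    }

    public void Configure(IApplicationBuilder app, IHostingEnvironment env)
    {
        if (env.IsDevelopment()) { app.UseDeveloperExceptionPage(); }
        app.UseMvc(); // the routing that RouteConfig/Global.asax used to set up
    }
}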

Fix-up ControllerContext changes

There are some fiddly tidy-up changes on ControllerContext and Request properties. For instance, Request.UserHostAddress is no more; you must look for HttpContext.Connection.RemoteIpAddress instead. Global HttpContext is gone, but of course you were always careful to use controller.HttpContext, weren't you?
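
For instance, a controller that used to read Request.UserHostAddress might now look like this (the controller is invented for the sketch):

using Microsoft.AspNetCore.Mvc;

public class WhoAmIController : Controller
{
    public IActionResult Index()
    {
        // Mvc 5:        Request.UserHostAddress
        // AspNetCore:
        var callerIp = HttpContext.Connection.RemoteIpAddress?.ToString();
        return Content($"You came from {callerIp}");
    }
}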

Add the new interfaces to Attribute-based filters, or else rework them as middleware

You do need to learn about the new kinds of filters, and consider whether what you are doing with your filters should stay as-is in attribute filters, or whether it might be simpler to move the logic into the new middleware approach.

Migrate any custom authentication/authorization to the new approach

The mistake to avoid here is trying to make your custom AuthorizationAttribute work as an AspNetCore attribute. Don't. Instead:

  • Move the logic of your custom AuthorizationAttribute into a Policy, which could be just a single method call.
  • Delete your custom attribute and let the built-in AuthorizeAttribute reference your new policy:
    [Authorize(Policy="MyCustomPolicy")]

It would have saved me half a day if I'd realised this up-front. But on this plan, you can migrate custom authentication in an hour or even minutes.
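
A sketch of what that can look like, assuming the old attribute's logic can be expressed as a check on the ClaimsPrincipal; the claim used here is invented:

using System.Security.Claims;
using Microsoft.AspNetCore.Authorization;
using Microsoft.Extensions.DependencyInjection;

public static class AuthPolicySketch
{
    // The single method call that carries the logic of the old custom attribute
    static bool IsAllowed(ClaimsPrincipal user) => user.HasClaim("scope", "my-api");

    // Call this from Startup.ConfigureServices; controllers then use
    // the built-in [Authorize(Policy="MyCustomPolicy")] shown above.
    public static void AddMyCustomPolicy(IServiceCollection services) =>
        services.AddAuthorization(options =>
            options.AddPolicy("MyCustomPolicy",
                policy => policy.RequireAssertion(context => IsAllowed(context.User))));
}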

Re-tool your unit test controller dependencies for the new framework

There is some popular code on the web for mocking the dependencies of an Mvc 3 or 4 or 5 Controller.ControllerContext. This must all be replaced.

Myself, for Mvc 4 & 5 I always used TestBase-Mvc which gave me two simple extension methods:

var unitUnderTest= new MyMvcController(...)
.WithHttpContextAndRoutes();

var webApiControllerUnderTest= new MyWebApiController(...)
.WithWebApiHttpContext<T>(HttpMethod httpMethod,
[Optional] string requestUri,
[Optional] string routeTemplate);

//Or, optional parameters to process the actual route urls from your RegisterRoutes config:

controllerUnderTest
.WithHttpContextAndRoutes(
[Optional] Action<RouteCollection> mvcApplicationRoutesRegistration,
[optional] string requestUrl,
[Optional] string query = "",
[Optional] string appVirtualPath = "/",
[Optional] HttpApplication applicationInstance)

This makes sure a controller can reference cookies, session, TempData, the Url.Action() calls and even the global HttpContext.Current in a unit test context.

For AspNetCore, I wrote TestBase.Mvc.AspNetCore (soon to be renamed to TestBase.AspNetCore), which offers a similar thing:

var uut = new ControllerUnderTest().WithControllerContext();
uut.Url.Action("a", "b").ShouldEqual("/b/a");
uut.ControllerContext.ShouldNotBeNull();
uut.HttpContext.ShouldBe(uut.ControllerContext.HttpContext);
uut.Request.ShouldNotBeNull();
uut.ViewData.ShouldNotBeNull();
uut.TempData.ShouldNotBeNull();
uut.MyAction(param)
.ShouldBeViewResult()
.ShouldHaveModel<YouSaidViewModel>()
.YouSaid.ShouldBe(param);

It also has a large set of fluent assertions for ViewResults, FileResults, etc. Once I'd written the new infrastructure, migrating my controller unit tests was mostly painless. (NB it still needs a few changes for CompatibilityVersion_2_2; it's currently written for 2.0.)

New in AspNetCore is the ease of testing not just individual controllers but the whole hosted application. The AspNetCore team coded a TestServer for their unit tests, and this server can be used, bootstrapped with your actual application's Startup code, and then tested with an HttpClient:

[TestFixture]
public class WhenTestingControllersUsingAspNetCoreTestTestServer : HostedMvcTestFixtureBase
{
    [TestCase("/dummy/action?id={id}")]
    public async Task Get_Should_ReturnActionResult(string url)
    {
        var id = Guid.NewGuid();
        var httpClient = GivenClientForRunningServer<Startup>();
        GivenRequestHeaders(httpClient, "CustomHeader", "HeaderValue1");

        var result = await httpClient.GetAsync(url.Formatz(new {id}));

        result
            .ShouldBe_200Ok()
            .Content.ReadAsStringAsync().Result
            .ShouldBe("Content");
    }
}

But I have come round to seeing this as automated integration testing, not unit testing. I would use it for testing e.g. content negotiation is working as expected, not for testing the domain logic of a controller action.

Conclusion

Since the arrival of netcore2, the cost of migrating to DotNetCore is dramatically lower. DotNetCore tooling and extensibility are very good. Even migrating AspNet is not an excessive task. Even for plain NetFramework 4 development, the new tooling is simpler and better. I reckon that dotnetcore is cheaper and easier to write and maintain. Both C# and the framework are evolving in ways that reduce your cost of development, and you get cross-platform deployment pretty much for free. At last.