Non-nullable reference types (the billion-dollar mistake)

Topics: C# Language Design
Apr 6, 2014 at 3:21 AM
Edited Aug 7, 2014 at 12:26 AM
NOTE: Since this conversation has become really long, I've added a compilation of the current state of my proposal
here https://gist.github.com/olmobrutall/31d2abafe0b21b017d56


If there's one important problem in C#, it's not having non-nullable reference types.

In my experience using C# 1.0 there were two main sources of exceptions: NullReferenceException and InvalidCastException. When generics were introduced in .NET 2.0, InvalidCastException almost disappeared, and the code became easier to read because there was more information. A great day.

Getting rid of NullReferenceExceptions looks harder:

Syntax

Perfect solution: In a perfect world, the ? symbol would be used for both value types AND reference types. Introducing it right now would break almost every single line of C# ever written.

Compromise: We have to content ourselves with the asymmetrical solution:
int a;  //non-nullable value type
int? a; //nullable value type
string a; //nullable reference type
string! a; //non-nullable reference type
At least there's an evident and intuitive alternative. Also, the ! is thin, which is good because this thing is going to be everywhere!
Dictionary<string,List<Country>>
Dictionary<string!,List<Country!>!>!

Run-time guarantees

Perfect solution:

C# is a strongly typed language. When a variable is of type T, the value is of type T, ALWAYS.
Preserving this guarantee for non-nullable reference types is really hard. Here are two great posts explaining all the possible things that can go wrong:

http://twistedoakstudios.com/blog/Post330_non-nullable-types-vs-c-fixing-the-billion-dollar-mistake
http://blog.coverity.com/2013/11/20/c-non-nullable-reference-types/#.U0CsMPmSxEw

Even if the language and the runtime were designed from scratch, it would present a challenge for a general-purpose language like C#.

Because there's no default value for string! (or, even better, Person!), there is no reasonable way to create a string![].

Compromise:

Let's create a standard non-nullable reference type, just like the one from Jon Skeet.

http://msmvps.com/blogs/jon_skeet/archive/2008/10/06/non-nullable-reference-types.aspx

And add some language features on top of it:

There's an implicit conversion from T to T!, both being reference types. If T is null, it throws an ArgumentNullException. This allows T! to spread anywhere it makes sense, without breaking code, while saving some keystrokes validating the argument.

Of course there's also an implicit conversion from T! to T.
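A minimal sketch of such a wrapper struct, assuming the checked conversions described above (the type and member names here are illustrative; Jon Skeet's original differs in details):

```csharp
using System;

// Illustrative sketch only; not the exact type from Jon Skeet's post.
public struct NonNullable<T> where T : class
{
    private readonly T value;

    public NonNullable(T value)
    {
        if (value == null)
            throw new ArgumentNullException("value");
        this.value = value;
    }

    public T Value
    {
        get
        {
            // default(NonNullable<T>) bypasses the constructor and leaves
            // value == null, so reads have to be guarded too.
            if (value == null)
                throw new InvalidOperationException("Uninitialized non-nullable reference");
            return value;
        }
    }

    // T -> T!: implicit, but checked; throws on null.
    public static implicit operator NonNullable<T>(T value)
    {
        return new NonNullable<T>(value);
    }

    // T! -> T: always safe.
    public static implicit operator T(NonNullable<T> wrapper)
    {
        return wrapper.Value;
    }
}
```

With this in place, NonNullable&lt;string&gt; s = "hi"; compiles and throws only when the right-hand side is null.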

When the value is read, if it's null (because of one of the problems explained in the blog posts), it throws an InvalidNonNullableException. The developer is responsible for ensuring this doesn't happen.

For example, accessing a non-initialized field of a string![] will throw an exception, but List<string!> should be safe. In some sense, arrays will become unsafe code in regard to nullability.

The same applies to anything else that can go wrong; the developer is responsible for initializing all the fields, for example.

While there should always be a check on setting the value, maybe the check when reading the value could be removed on Release compilations.

Additionally, all the assignments from T! to T! can be made without any check at all.

So far we have a good compromise solution in my opinion, but I can think of a problem:

My rules will check this assignment at run time:
string a;
string! b = a; //exception at runtime 
and of course this one
string[] a;
string[]! b = a; //exception at runtime 
but what about this one
string[] a;
string![] b = a; //exception at run-time 
The fundamental problem is that, because we have implemented it as a struct, a new object would have to be created. Maybe even complex deep copies for things like Dictionary<string!, Person!>.

So, let's remove the struct altogether and make it just a compile-time illusion, like generics in Java.
string![] str = new string![5]; 
str.GetType(); //returns string[]
For reflection purposes, I will also add a [NotNull] attribute on every method argument and type member of type T!, but I don't think there's a good way to add the attribute to inner type parameters of generic types (or arrays).

Conclusion

Is it a beautiful solution like C# generics? No; it's a broken solution like Java generics. But Java is better off with its broken generics than with no generics at all, and, after 12 years, nobody has come up with a perfect solution.

Hey, let's solve 80% of a billion-dollar mistake; it's worth it!
Apr 6, 2014 at 5:38 AM
The problem with non-nullable reference types lies at a more fundamental level: the CLR. The CLR would need to be modified to be aware of them.

What about
myClass! x = Default<myClass!>
How would we define the default value of a non-nullable class?
Prior to the constructor the variable would be null, which is invalid for a non-nullable.
Apr 6, 2014 at 9:05 AM
Edited Apr 6, 2014 at 9:25 AM
My proposal is backwards compatible and does not require any CLR change.

A variable of type string! gets compiled to just string, except that C# can use this information to introduce automatic checks when reading and writing this variable.

The default value of string! is a broken reference that will throw an exception when read.

That's why arrays are unsafe (but lists are not), and the developer is responsible for initializing all non-nullable fields with non-nullable values.
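To make the asymmetry concrete, here is a sketch in the proposed syntax (not compilable C# today):

```csharp
// Proposed syntax, for illustration only:
string![] array = new string![3]; // every element starts as a broken (null) reference
var s = array[0];                 // InvalidNonNullableException on read

List<string!> list = new List<string!>();
list.Add("hi");                   // the only way in is the checked T -> T! conversion,
var t = list[0];                  // so every element read out is guaranteed non-null
```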

Finally, I've come up with a way of encoding more information in the NotNull attribute:
public Dictionary<string!,string>! MyDictionary{ get{....} }
Could get compiled to:
[NotNull("0,1")]
public Dictionary<string,string> MyDictionary{ get{....} }
As you can see, the result erases all T! information to preserve backwards compatibility (and compatibility with other languages), but the attribute retains it.

The values "0,1" in the attribute represent the zero-based indexes, in depth-first order, of the types and type parameters that have been marked with !.

So in this example, it says that the dictionary (0) and the first type parameter (1, the key) are non-nullable, but the second one (2, the value) remains nullable.

Another alternative would be to use bit fields to encode this information in a long-valued attribute.
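Spelled out for a deeper type, the depth-first numbering would work like this (the encoding itself is hypothetical):

```csharp
// Dictionary<string!, List<Person!>!>!  -- depth-first indexes:
//   0: Dictionary<,>  (!)
//   1: string         (!)
//   2: List<>         (!)
//   3: Person         (!)
//
// so the member would compile to:
// [NotNull("0,1,2,3")]
// public Dictionary<string, List<Person>> Map { get; }
//
// and Dictionary<string!, string>! from the example above:
//   0: Dictionary (!), 1: key string (!), 2: value string (unmarked)
//   => [NotNull("0,1")]
```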

This way C# can publish this information to the reflection API and other languages (VB, F#...) without breaking IL compatibility.
Apr 6, 2014 at 9:51 AM
Edited Apr 6, 2014 at 10:06 AM
Where should the checks be made?

In my initial approach, I was expecting to introduce the checks only when there's a conversion from T! to T or the other way around.

So this code:
class Person
{
     string! name; 
     public string! Name
     {
          get{ return name; }
          set{ name = value; } 
     }
     
     public Person(){      }

     public Person(string! name){
         this.name = name;
     }
}

string name = "Peter";
Person p = new Person(name);

string name2 = "John";
p.Name = name2;

string name3 = p.Name; 
Should be compiled to:
class Person
{
     [NotNull("0")]
     string name;
     [NotNull("0")] 
     public string Name
     {
          get{ return name; }
          set{ name = value; } 
     }
     
     public Person(){      }

     public Person([NotNull("0")]string name){
            this.name = name;
     }
}

string name1 = "Peter";

if(name1 == null)
   throw new ArgumentNullException("name parameter");
Person p = new Person(name1);


string name2 = "John";

if(name2 == null)
   throw new ArgumentNullException("Name property");
p.Name = name2;

var _name = p.Name;
if(_name == null)
    throw new InvalidNonNullableException("Name property"); //Remove this check on Release builds? 
string name3 = _name; 
Notice that the default constructor does create an invalid state. Any compiler help with definite assignment would be welcome but, in the complicated cases, it's the developer's responsibility to set the property.

Also, notice how the validations were made outside of the class, and not inside.

After thinking about compatibility with other languages, it's clear that this is not a good idea, because they may not be aware of the non-nullable feature and would not make any checks at all.

So, all the input parameters of public members (constructors, methods, properties...) should be checked inside of the class:
class Person
{
     [NotNull("0")]
     string name;
     [NotNull("0")] 
     public string Name
     {
          get
          { 
               if(name == null)
                    throw new ArgumentNullException("name");
              return name;
           }
          set{ name = value; } 
     }
     
     public Person(){      }

     public Person([NotNull("0")]string name){
          if(name == null)
               throw new ArgumentNullException("name");
 
            this.name = name;
     }
}

string name1 = "Peter";
Person p = new Person(name1);

string name2 = "John";
p.Name = name2;

var _name = p.Name;
if(_name == null)
    throw new InvalidNonNullableException("Name property"); //Remove this check on Release builds? 
string name3 = _name; 
This way more guarantees are preserved in common scenarios where the library is consumed from other languages. The (optional) read check stays outside, however.

In more complicated scenarios, where it's not guaranteed that the check will be made inside of the class, the check should be made outside, like when the non-nullable reference type is a type argument of an unconstrained generic type:
List<string!> list = new List<string!>(); 
string name = "Peter";
list.Add(name);
Should get compiled to:
List<string> list = new List<string>(); 
string name = "Peter";
if(name == null)
    throw new ArgumentNullException("value argument"); 
list.Add(name);
Because there are no guarantees of any check inside of the method.
Apr 6, 2014 at 10:59 AM
I'll ask again, using your example of the reference type Person
Suppose I want Person to be a Non-Nullable.

Person! x = New Person!()  ' Hasn't got a constructor that takes args,
x.Name = "Peter"
Would that mean the non-null reference types that contain parameters that are also reference types, require those parameter to also be non-null?

Assuming Non-Null references
class Person!
{
    public property string! Name;
       
    public Person(){      }

     public Person(string! name)
     {
       this.name = name;
     }
}
How is the backing field for the property Name initialised and to what value? String.Empty?

What about this situation?
Public Class example 
  ' For scoping reason need to accessible at the class level.
  Private y As Person ' Valid  y = Default<T> / Nothing
  Private x As Person!  'Required to be non-null but creating a instance here isn't valid;
  
 Public Sub P( Name AS String! )
   x = New Person( Name )  ' Creating an instance 
 End Sub
 Rem ...
End Class
Would it require a non-null reference type to have a static constructor, which instantiates an instance of that class to be used as the default<T!> instance?
A bit like String.Empty.
class Person!
{
   private static Person! _NonNullInstance_;
   static Person()
   {
     _NonNullInstance_ = new Person( String.Empty );
   }
    public property string! Name;
       
    public Person(){      }

     public Person(string! name)
     {
       this.name = name;
     }
}
What about List<T>! (List<T> is also a reference type.)

Would it also require a new BCL with non-null versions of the reference types that already exist? e.g. String!
What about situations where I as a developer don't control or even have access to the code I'm using (e.g. some external library),
but in my program I require a non-null reference of a reference type from that library?
Or are you suggesting automagically "inserting" null checks into their code at compile time? That would invalidate trusted code, as the hash wouldn't match.
Apr 6, 2014 at 11:43 AM
Hi Adam,

I think I haven't properly explained my idea.

My proposal is just a compile-time trick; it's completely broken, easy to cheat, and the types are not accessible through reflection. Just like Java generics.

Still, I think it's useful: it reduces boilerplate code and provides an (estimated) 80% guarantee that a string! is not null.
Would that mean the non-null reference types that contain parameters that are also reference types, require those parameter to also be non-null?
Nope, it's up to Person's implementation to determine whether Name is string or string!, and if it goes for string! it's the developer's responsibility to guarantee this contract. In many easy cases the compiler could help, though.
How is the backing field for the property Name initialised and to what value? String.Empty?
The Name field is initialized to..... wait for it... null. So the developer that implemented Person is breaking the contract, not the compiler. He is responsible for assigning name = ""; or name = "unknown"; in the default constructor. In this easy case the compiler could help, but not in others.
What about this situation?
I don't fully understand this scenario, but the compiler should try to ensure a string! is non-null at compile time whenever it can, though it won't always be able to. When the type system is too restrictive you can always rely on (implicit) casting:
string name = null;
string! Name 
{
   get
   {
       if(name == null)
           name = "specify name please";
       return (string!)name; //casting could be implicit also
   }
}
Would it require non-null reference type to have a static constructor? which instantiates an instance of that class to be used the default<T!> instance?
There are no declarations of non-nullable classes; just as you don't declare nullable structs, you just use them. Also there is no need to write New Person!(); calling the normal constructor is enough. This code should be valid:
Person! person = new Person()
Would it also require a new BCL with non-null versions the reference types that already exist?
It will require the BCL to be updated, but not by declaring non-nullable types; rather by declaring non-nullable arguments / properties / return types / indexers / etc...

The new BCL will still be compatible with languages that do not support this feature because, at the runtime / IL metadata level, it's just a bunch of attributes here and there. So it won't be a breaking change like replacing ArrayList with List<T>.
Apr 6, 2014 at 11:52 AM
Edited Apr 6, 2014 at 8:09 PM

Compile-time checking

So far, because there's an implicit conversion from T to T! and the other way around, there is no compile-time checking!

This is intentional, to preserve clean backwards compatibility. So this code will compile without errors:
string comma = ",";
"hi, darling".Split(comma); 
Even if Split has been updated to take a
public string![]! Split(string! separator)
But a warning could be created that flags any conversion from T to T!, so we would have to fix our code like this:
string comma = ",";
"hi, darling".Split((string!)comma); 
Or
string! comma = ",";
"hi, darling".Split(comma); 
Or even better
var comma = ",";
"hi, darling".Split(comma); 
This warning could be opt-in (or opt-out?) per project, and converted to an error using the 'Treat Warnings as Errors' feature.
Apr 6, 2014 at 1:57 PM
There is a situation where you can't declare a variable and instantiate it at the same time.
I think it involves Dim WithEvents in VB.NET; for the life of me I can't remember what it was.

So you have to declare the variable, then instantiate via a method, like the constructor (or Form_Load).
Apr 6, 2014 at 8:22 PM
I don't know the generated code of WithEvents, but it shouldn't be an issue.

There's no need to declare and instantiate the variable at the same time. You can just let the variable be nullable and cast it to non-nullable in the external API.

Just as in:

class Form {
   withevents ComboBox combo= null;   //nullable backing field, with withevents attribute as if it existed in C#
   ComboBox! Combo
   {
     get 
     {     
        return (ComboBox!)combo; //casting could be implicit also 
     }
  }

  Form(){
    combo = new ComboBox();
  }
}

Delegates and events:

Delegates raise an interesting question, however:

For the delegates that represent events, a nullable delegate field/event should be used:
public EventHandler Click;  
But for public delegate fields that are implemented by just one method, represent replaceable extension points, and usually return a value, a non-nullable delegate type could be a better option.
public static Func<Invoice!, string!>! GenerateInvoiceId = invoice => Guid.NewGuid().ToString(); //Default implementation, replace it if needed
Coordinator
Apr 8, 2014 at 1:33 AM
Introducing NotNullable is a much gnarlier issue than introducing Nullable (it seems symmetrical to Nullable, so what's the deal?).

When one starts with a value type (say byte) you have [0, byte.MaxValue] range of possible values.
If you want to add another <null> value, you can simply create a tuple {byte value, bool hasValue} that augments the byte with additional state. That is basically how Nullable<T> works.
The rest (the "?" syntax, operator lifting with null propagation) is just sugar on top of Nullable<T>. The important part is that adding extra state is completely expressible in .NET metadata.
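The {value, hasValue} tuple described above can be sketched in plain C# (simplified; the real Nullable&lt;T&gt; has more members plus dedicated compiler and runtime support):

```csharp
using System;

// Simplified model of Nullable<T>: a value type plus one extra bit of state.
public struct SimpleNullable<T> where T : struct
{
    private readonly bool hasValue;
    private readonly T value;

    public SimpleNullable(T value)
    {
        this.value = value;
        this.hasValue = true;
    }

    // default(SimpleNullable<T>) has hasValue == false: the <null> state.
    public bool HasValue { get { return hasValue; } }

    public T Value
    {
        get
        {
            if (!hasValue)
                throw new InvalidOperationException("Nullable object must have a value.");
            return value;
        }
    }
}
```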

Non-nullable would essentially require removing the <null> state that any reference type variable otherwise has. And perhaps, by generalizing, removal of default<T> as a valid state too.
That, indeed, cannot be done without fundamental changes in CLR.

Typically when non-null values are desired it comes down to scenarios that look roughly like
    public int NameLength(nevernull string name)
    {
        // this will never throw since name is never null
        return name.Length;
    }
Unfortunately removal of the state cannot be done via composition. You may try introducing "struct NotNull<T>" wrapper to represent a "safe to access" reference.
    public int NameLength(NotNull<string> name)
    {
        // this will never throw since it is never null
        return name.Reference.Length;
    }
but the null state was not really removed and the above code will work only as long as callers will not try passing default(NotNull<string>).
Type system will not help here since default(NotNull<string>) is generally valid and since it is the value of uninitialized fields and array elements you do not need to do anything special to get it.

That is pretty inconvenient. As long as there are holes through which null/defaults can get into the system you will have to write something like
    int NameLength(NotNull<string> name)
    {
        var shouldNotBeNull = name.Reference;
        Debug.Assert(shouldNotBeNull != null, "have we not agreed to never pass defaults here?");

        return shouldNotBeNull.Length;
    }
Well, that is not much different than:
    int NameLength(string name)
    {
        var shouldNotBeNull = name;
        Debug.Assert(shouldNotBeNull != null, "have we not agreed to never pass nulls here?");

        return shouldNotBeNull.Length;
    }
At this point the attempt to implement notnull through wrapping does not seem to solve the problem, it just turns it around a bit. You still have nulls and you still have to deal with them in roughly the same way as before.

Once we agree that the notnullable cannot be expressed precisely, there is still room for a "best effort" feature that involves decorating variables somehow as never-null (attributes perhaps) and performing additional analysis that finds patterns that could lead to violations of non-nullability.

Historically, such features were the domain of tools (FXCop, CodeContracts, ... ) and not the language itself. Perhaps the pluggable nature of compiler diagnostics in Roslyn is another avenue to explore here.
Apr 8, 2014 at 9:45 AM
Hi VSadov, good that the proposal caught the attention of someone on the C# team!

Your main objections deal with implementing it as a struct wrapper (NotNull<string>). I'm fully aware of the limitations, and that's why my proposal, even if it starts with such an implementation in order to introduce it, falls back to a fully attribute-based solution at the end:
The fundamental problem is that, because we have implemented it as an struct, a new object will have to be created. Maybe even complex deep copies for thing like Dictionary<string!, Person!>.
So, let's remove the struct altogether and make it just a compile time illusion, like generics in java.
My implementation knows that the value of default(string!) is null, and that it is an invalid state. I'm also aware that we cannot reach 100% type-safe protection at compile time, and when it comes to interoperating with other languages that haven't implemented the feature, 0% compile-time protection is what we get. So automatic run-time checks are added.

My solution is also backwards compatible, with a smooth ramp from the current state (nulls produce run-time exceptions everywhere, but code compiles) to a future where most of them could be caught at compile time.

Let me explain the idea once more from a different angle:

The two blog posts that I've read about the issue

http://twistedoakstudios.com/blog/Post330_non-nullable-types-vs-c-fixing-the-billion-dollar-mistake
http://blog.coverity.com/2013/11/20/c-non-nullable-reference-types/#.U0CsMPmSxEw

explain what makes the feature really hard to implement perfectly. They are searching for something like .NET 2.0 generics, and they conclude that it's hard (impossible?).

My approach is different: I just look for how far we can get trying to solve the problem with broken features:

Step 1. Add ! annotations that do nothing.

We could implement a C# compiler where adding ! at the end of any reference type (or reference type parameter) has no effect at all. It creates no new errors or warnings, and no new compiled code is emitted, but it is easy to write and is shown integrated into the type signature in IntelliSense tooltips.

Think of it as just documentation. This feature alone would be worth it. It would remove a lot of uncertainty about whether null is a valid value for a method without reading the documentation (nobody does in most cases).

So your code will look like this:
    int NameLength(string! name)
    {
        var shouldNotBeNull = name;
        Debug.Assert(shouldNotBeNull != null, "have we not agreed to never pass nulls here?");

        return shouldNotBeNull.Length;
    }
We could do that? Yes, let's add that!

Step 2. Multi-language / assembly level support using Attributes

The problem with Step 1 is that the annotations disappear when you compile the code and use it as a dll, from C# or any other language that could support the feature in the future.

How could we retain this information? By adding attributes on all fields, properties, method return types, parameters, etc.

Because the types of all these members could be generic types, some way of encoding this information is necessary. I've proposed comma-separated indexes (or bit fields) of the positions of the non-nullable types and type parameters in depth-first order, but there are other options.

This way, we could read this sort of documentation in our code, and also in the framework itself or third party libraries.

So
  int NameLength(string! name)
  {
        
        return name.Length;
  }
Will get compiled to
  int NameLength([NotNull("0")]string name)
  {
        return name.Length;
  }
We could do that? Yes! Let's do it!

Step 3. Add some automatic run-time checks

Every property or method that takes a string! should automatically check the parameter for non-nullability at the beginning of the method/property. The method could be called from a language that does not support the feature, so you never know.

So
  int NameLength(string! name)
  {
        
        return name.Length;
  }
Will get compiled to
  int NameLength(string! name)
  {
        if(name == null)
             throw ArgumentNullException("name");

        return name.Length;
  }
Notice how, since the compiler just checks the input parameters, it can generate a nice ArgumentNullException just as you would have written yourself.

This check only tests that the parameter itself is not null; it does not inspect the internal structure of generic parameters.


So
  int Method(List<string!>! names)
  {
        return names.Count;
  }
Will get compiled to
   int Method([NotNull("0,1")]List<string> names)
  {
        if(names == null)
            throw new ArgumentNullException("names");

        return names.Count;
  }
But not to
   int Method([NotNull("0,1")]List<string> names)
  {
        if(names == null)
            throw new ArgumentNullException("names");
       
        //foreach(var n in names)
        //     if(n == null)
        //          throw new ArgumentNullException("names element");

        return names.Count;
  }
Because this would be too expensive and also impossible to implement in the general case.

Additionally, a similar check could be added just before every return, in methods (or properties) that return a non-nullable reference type, at least when compiled in Debug mode.

So this code:
  string! Join(List<string!>! names)
  {
        return string.Join(", ", names);
  }
Could compile, in Debug mode at least, to:
  string Join([NotNull("0,1")]List<string> names)
  {
        if(names == null)
            throw new ArgumentNullException("names");

        var result = string.Join(", ", names);

        if(result == null)
             throw new InvalidNotNullException("result");
       
        return result;
  }
For high-performance methods that want to use the feature, an [AvoidNotNullRuntimeChecks] attribute could be added at the method / class level.
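Such an opt-out could look roughly like this (the attribute name and targets are hypothetical, and string! is the proposed syntax, not valid C# today):

```csharp
using System;

// Hypothetical opt-out attribute recognized by the compiler:
[AttributeUsage(AttributeTargets.Method | AttributeTargets.Class)]
public sealed class AvoidNotNullRuntimeChecksAttribute : Attribute
{
}

// The compiler would emit no null checks inside this method:
[AvoidNotNullRuntimeChecks]
static int FastPath(string! s)
{
    return s.Length;
}
```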

We could do that? Absolutely!

Step 4. Add some optional compile-time checking on top of it

We could add an optional warning every time an implicit cast from T to T! is made. A new version of the BCL, with plenty of ! annotations here and there, would generate many warnings on old code that doesn't use the feature, so it should be easy to disable at the project level.
string s = "hola";
NameLength(s); //warning: implicit conversion from string to string!
string s = "hola";
NameLength((string!)s); // -> warning disappears
string! s = "hola";
NameLength(s); // -> warning also disappears, and you're one step closer to heaven. 
A project with very little technical debt that has already passed this stage could make this warning a compile-time error at the project level or, maybe, just set 'Treat Warnings as Errors'.

But there will be many cases where the compiler cannot help. For example, this should be legal:
string![]! s = new string![100]; //just one ! needed in the array constructor
s[0].Length // InvalidNotNullException is raised at runtime 
Another case is non-initialized fields in a constructor, but maybe there the compiler could help more. It already does something similar for structs; currently:
    public struct Bla
    {
        public int a;
        public int b;

        public Bla(int a, int b)
        {
            this.a = a; 
        }
    }
Produces:
Error 1 Field 'Bla.b' must be fully assigned before control is returned to the caller

So it shouldn't be that hard to make
    public class Person
    {
        public string! name;
    }
generate:

Warning: name is a non-nullable string but it's not fully assigned in every constructor.

This will solve the problem.
    public class Person
    {
        public string! name = "";
    }

Conclusion

It's a broken feature: some null values will skip the compile-time checks. Maybe some will even skip run-time checks (at least in Release mode) for performance reasons, but we will be able to catch most of the errors of a billion-dollar problem at compile time, or at least earlier in the stack trace. Lots of savings.

Nobody will win a Nobel prize for this, and there will be people criticizing the feature for being broken, but they won't have anything better.

And we can add more warnings / errors / runtime-checks for the complicated cases in the future anyway.
Apr 8, 2014 at 9:15 PM
I really really liked the solution presented in the blog entry you linked to:

http://twistedoakstudios.com/blog/Post330_non-nullable-types-vs-c-fixing-the-billion-dollar-mistake

The author tries very hard to maintain backwards compatibility, and achieves this with relatively minimal changes to the language.

While it does require changes to the CLR, I feel that this is the best possible way forward. Anything short of this — such as the pared-down version presented by Olmo — would give us a feature about as broken as Java generics, with all its downsides and problems. It would be nice if that could be avoided.

In particular, if I were analyzing an assembly using Reflection, I would expect non-nullable types to be represented “properly” by the API just like generic types, array types, pointer types etc. currently are. Nullable value types are currently represented as the generic instantiation Nullable<T>, which is perfectly acceptable; but getting a type object that represents (say) string[] and being completely unable to determine whether this is actually string[], string![], string[]! or string![]! would be exactly the reason the Java implementation of generics is generally poorly received.
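The asymmetry is easy to see through today's Reflection API (a runnable illustration of the point above):

```csharp
using System;

class NullableReflectionDemo
{
    static void Main()
    {
        // Nullable value types are first-class in metadata:
        Type t = typeof(int?);
        Console.WriteLine(t.GetGenericTypeDefinition() == typeof(Nullable<>)); // True
        Console.WriteLine(Nullable.GetUnderlyingType(t));                      // System.Int32

        // Under an erasure-based proposal, string[], string![], string[]!
        // and string![]! would all report the same System.String[]:
        Console.WriteLine(typeof(string[]));                                   // System.String[]
    }
}
```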
Apr 8, 2014 at 10:30 PM
Edited Apr 8, 2014 at 10:35 PM
The solution from Craig Gidney is really nice from a technical point of view; it anticipates a lot of changes to make a solid language solution.

There are some corner cases however, like when he says:
A few existing compiler errors, like disallowing constraining a generic parameter by a sealed type, need to be removed or rethought (because T! is a T, even if T is sealed).
And also other problems, like the ones Eric Lippert explains about the initialization order of constructors, destruction, etc., but it could work.

But let's pretend that we find a solution that is not too restrictive for all of those problems....

However, this solution is completely naive in practice. The solution, not the author, as he states at the end:
Adding non-null types to C# is do-able, but not simple and not cheap. I’m sure it overcomes the features start at -100 points threshold, but that’s before considering the implementation costs. Even if the feature was already implemented in the language, there are mountains of existing classes that need to be updated.
We may never see non-null types in C#, but I hope we do.

The real problem

Even if the solution is 100% backwards compatible from a language standpoint (the new compiler will be able to compile the old code), the solution is almost 0% backwards compatible for the updated API.

Think about it: generics were introduced in 2005, when C# was about 3 years old. The amount of code written in C# since then has multiplied at least 10x.

Also, generics only required that you update your collections. How many lines of code would break when you replace all the ArrayLists with List<T> (and similarly for dictionaries, etc.)? 10% at most.

Now, how many lines will change if you make non-nullable all the properties, method arguments and return types that theoretically should be non-nullable?
At least 90%.

So, a gross estimation is that, globally, we are dealing with a problem that is 100x bigger than when .NET generics were introduced.

Now let's look at Microsoft's own example.

How many APIs were changed in the .NET Framework after generics were introduced?

I would say 0%. The .NET Framework is full of loosely-typed APIs pre-.NET 2.0: the whole reflection API, WinForms, WebForms, Regex... pretty much everything but LINQ and some newer things.

Even WPF, which appeared in .NET 3.0, completely ignores generics because it was started as a project before they existed, and nobody changed it before release.

With an implementation like Craig Gidney's, every ! means a breaking change for the API consumers, so the framework will never be updated.

If the framework is never updated, nobody else will update either, because all the interactions with the framework will return loosely-typed nullable types that you will need to cast. It's as if every method in the framework returned object while you wanted to work with types.

The code transformations, as Craig Gidney admits, would need to be applied by hand, and he is speaking of nothing less than all the .Net code ever written.

With my implementation, however, adding the necessary ! is not a breaking change but a warning. Microsoft could then start annotating all the framework, and the consumers could choose to disable the warning, keep it as a warning, or compile warnings as errors.

Maybe 10 years from now, enough code will have changed that a 100% safe and strict solution could be used. Then it won't be that hard to change the implementation of the compiler, since both solutions use a similar syntax. Just adding some ! annotations on generics and withdefault(T) here and there could be enough... but until that day arrives we need a transitional solution.

To put it clearly: how many copies of Visual Studio 2015 will Microsoft sell with Craig Gidney's solution?
Apr 8, 2014 at 10:39 PM
Also, if we become crazy and go for a 100% safe solution, let's just assume that T is non-nullable and you have to write T? even for reference types.

Probably this will mean fewer code changes, since most reference types should be non-nullable anyway, and the end solution will be better.
Apr 8, 2014 at 11:54 PM
@Olmo: well, your last proposal means that suddenly all the previously written code will become invalid! Not the change I'd like to encounter in a programming language I'm working with.
Apr 9, 2014 at 12:16 AM
Edited Apr 9, 2014 at 12:22 AM
I know, that's why I started with 'if we become crazy'.

I see three options:
  1. Olmo: Reasonable, backwards compatible in the language and in the updated API, but a flawed solution based on attributes and warnings. I've explained it in all the long comments.
  2. Craig: Solid language solution, backwards compatible in the language, but not in the updated API. The logical conclusion is that no API will ever be updated because the framework won't create breaking changes. A waste of resources for the C# team. If the framework were changed, 90% of the C# lines ever written would have to change before using the new framework version, so no one will buy Visual Studio.
  3. CRAZY: We re-interpret all the reference types as non-nullable and build the compiler the way it should have been done: making nullability and reference/value semantics independent concepts, breaking code without contemplation. 40-50% of the C# lines ever written will need to change. Nobody will buy Visual Studio, but the C# team will get an honorable mention in Lambda the Ultimate.
Even if I'm quite a risky person (sometimes I even forget to brush my teeth), I will go for #1, and slowly move to #2 in 5 or 10 years.
Apr 9, 2014 at 8:50 AM
Edited Apr 9, 2014 at 9:05 AM

Run-time checks

I'll try to elaborate a little bit more on the run-time checks that should be made in my preferred alternative (Olmo).

Basically there are two orthogonal dimensions: input vs output parameters, and explicit ! vs open generic arguments that could be !.

Explicit ! annotation:

  1. Non-nullable reference input parameters (methods, property setters, event adders/removers, indexer setters and constructors):
    An ArgumentNullException should be thrown right at the beginning of the member.
public bool IsPalindrome(string! str)
{
     if(str == null)
         throw new ArgumentNullException("str");   //Automatically generated

     //...method body...
}
The check could be restricted to non-private methods only, but I think that would make it more confusing. I'd prefer it on all methods.

Since the developer of the method has changed the code (adding the !), he is well aware of the new behavior, and has probably removed the manual null check written before.
  2. Non-nullable reference output parameters (method returns, out and ref arguments, property getters, indexer getters):
    An InvalidNonNullableException should be thrown before losing control of the stack frame. Only in release mode?
public string! ToString()
{
     var result = this.FirstName + " " + this.LastName;

     if(result == null)
         throw new InvalidNonNullableException("result");   //Automatically generated

     return result;
}

Since the parameter is not generic, there shouldn't be big problems concerning TryParse on reference types.

That wouldn't work:
public bool TryParse(string str, out ExpressionTree! tree)
{
}
because when it fails you have to return a null value in tree. But just removing the ! is enough:
public bool TryParse(string str, out ExpressionTree tree)
{
}
Or even better:
public ExpressionTree TryParse(string str)
{
}

Open generic arguments:

I see basically three strategies here:
  • Conservative: Just as Craig proposed, only generic classes that have been annotated with ! could be used with non-nullable reference types. Even if the compiler is not able to assert it, it would be good for the library developer to be able to ensure that the collection is null-safe for type parameter T, meaning that it is never going to return default(T). Just one copy of the IL would be necessary, and no run-time checks would have to be made inside or outside the class.
  • Bipolar: The CLR creates two copies of each generic type, one for nullables and one for non-nullables, so List<string> and List<ComboBox> will share the same IL, but List<string!> and List<ComboBox!> will have a different one that includes all the internal checks with the same semantics as expressed in Explicit ! annotation.
  • Aggressive: We cross our fingers and pretend that, with no code changes in the generic type, it will work with non-nullable types. My intuition tells me that List<T> should work as List<T!> with no code changes; not so sure about Dictionary<K, V>. Any opinions?
    Just one copy of the IL would be necessary, but run-time checks would have to be added outside the class or method, effectively moving the checks to the client code.
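To make the injected checks concrete, here is a hedged sketch in plain, current C# (the ! syntax doesn't exist, so NonNullList is an invented stand-in) of what the specialized non-nullable copy under the Bipolar strategy, or the client-side checks under the Aggressive one, would effectively amount to:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical stand-in for the specialized List<string!> instantiation.
// Only this copy gets the injected checks; the plain List<string>
// instantiation would keep the original, check-free IL.
public class NonNullList<T> where T : class
{
    private readonly List<T> items = new List<T>();

    public void Add(T item)
    {
        if (item == null)                 // injected input check (! copy only)
            throw new ArgumentNullException("item");
        items.Add(item);
    }

    public T this[int index]
    {
        get
        {
            T result = items[index];
            if (result == null)           // injected output check
                throw new InvalidOperationException(
                    "A null escaped a non-nullable instantiation");
            return result;
        }
    }

    public int Count
    {
        get { return items.Count; }
    }
}
```

Under the real Bipolar strategy the CLR would generate this second copy automatically; no source change to List&lt;T&gt; itself would be needed, which is the whole point.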
This is a complicated case:

Dictionary<string!, Person!>! dic = new Dictionary<string!, Person!>(); 

Person! person; //Problem n° 1, invalid state, don't use it or it will break

bool result = dic.TryGetValue("john", out person);
//Problem n° 2: if the client-side check is made here, then you cannot test result first
if(result)
{
   Console.WriteLine(person.Name); // Safe use
}
else
{
    Console.WriteLine(person.Name); // Runtime exception
}

We can live with Problem 1 (and with declaration expressions it will look nicer), but for Problem 2 we will need to mark the person variable in some way so the tests are made lazily.

The main problem is not situations where a NullReferenceException is thrown; I've already surrendered to the fact that this will happen sometimes, and this feature just tries to minimize them.

Instead, what worries me is situations where NullReferenceException is the only thing that can happen.

Finally, I haven't considered ref parameters, but whatever solution works for input and output parameters should apply, in combination, to ref parameters.

I would love to hear your opinions!
Apr 11, 2014 at 12:41 AM
What about this situation? Polymorphic functions and constrained generics.
/* T is _IComparable<T> */
public static class Exts
{
 public static bool IsBetween < T >          ( T v, T x, T y )
 public static bool IsBetween < T : class  > ( T v, T x, T y )
 public static bool IsBetween < T : struct > ( T v, T x, T y )
 public static bool IsBetween < T? >         ( T v, T x, T y ) /* nullable<T> */
 public static bool IsBetween < T! >         ( T v, T x, T y ) /* non-null ref */
}
What gets invoked?
Apr 11, 2014 at 6:16 AM
Edited Apr 11, 2014 at 6:47 AM
I do think that fixing the billion dollar mistake would instantly make C# a significantly better language, causing fewer bugs etc. I have some tweaks to the things proposed here but it's close. The main thing is that I don't think this should be asymmetric since that would be hideously ugly and confusing (while basically undermining the whole feature, since people will keep using the "standard" reference even when they shouldn't). I do think that you still need the T! for transitioning projects over to the new style of doing things.

So a "string" would be non-nullable by default, but you have a way of reverting to the old style behavior project wide for legacy code so you can still upgrade to C# 7 or whatever and opt out of this one code-breaking change with a flag that will be supported indefinitely. You can then slowly transition to non-nulls everywhere (static checking will tell you where you need to make changes), or stick to the "classic" behavior forever. The C# project upgrade wizard will set the opt-out flag for you when upgrading from an earlier version.

So:
  1. T? is nullable, T! is non-nullable, and T defaults to one of the two based on a project setting
  2. Thus, legacy code can be modified even in the presence of non-nullable types, by explicitly using T!. So if you have old code that largely acts the same way as C# of old, but you still have to update it and a new library you're using has non-nulls everywhere, those types would show up as T! from within your legacy project.
  3. New code can use T? for nullables, but never needs to use T!. They can still use T!, it's just a no-op.
  4. Absolutely no implicit conversions from T? to T!. That's nuts and causes overload ambiguities etc. Require an explicit cast. Implicit conversion from T! to T? is of course absolutely fine, and means all existing library code will continue to work without a bunch of extra noise.
  5. No CLR changes. This is purely a C# surface level change. There's a special type called NonNullable that can only be constructed using special syntax (in C#) that absolutely prohibits creating a non-nullable type that is really null under the covers.
  6. So other languages would be free to create this type with "default" and break your C# program by introducing nulls inside a non-null. Tough luck. But as long as you stay in C#, or a language respecting the restrictions for this special type, you're good.
  7. There's some fiddlyness with constructors. Perhaps disallow calling any methods in a constructor until all non-nulls have been assigned?
  8. Data flow analysis would be nice. So any code of the form "if (x != null) { .... x .... }" the x will have a non-nullable type inside the body. This would make upgrading code much easier since many types will already have been converted to T! by prior null checks.
You might dislike having a giant switch that basically changes the flavor of the language, but IMO given that this is probably the most glaring language design mistake in C# having a slight inconvenience is totally worth it. The alternative is to never fix it which is deeply unappealing to me.
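The data flow analysis in point 8 could look like this sketch (hypothetical syntax under the symmetric T?/T! proposal above; GetName and PrintUpper are invented names):

```csharp
string? x = GetName();        // GetName() is hypothetical and may return null

if (x != null)
{
    int len = x.Length;       // OK: inside the guard, x is narrowed to string!
    PrintUpper(x);            // x can be passed where a string! is expected
}

int bad = x.Length;           // warning/error: x may still be null here
```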

It might be instructive to look at how Eiffel fixed this. Just to get an idea of what kinds of corner cases come up when retro-actively adding "null safety" to a language that wasn't designed with it from the start (Eiffel calls it "void safety")
Apr 11, 2014 at 7:19 AM
AdamSpeight2008:

C#'s overloading system is already not expressive enough for such a declaration. You cannot overload on generic constraints, so 1, 2 and 3 are incompatible.

In a future, smarter version of C#, maybe 2 and 3 could work together as long as 1 is not there.

Ignoring these issues, in my preferred implementation (Olmo), because the types are faked and are actually just attributes, even something like this won't be allowed:
    public static class Exts
    {
        public static void Write(string text);
        public static void Write(string? text);
    }
But I don't think this is a big deal. Basically, if there's something sensible to do with nulls that justifies an overload, just don't annotate the method as non-nullable and that's it.

Also, extension methods declared on nullable types (T) should appear on non-nullable variables and the other way around (due to the unsafe implicit conversion).
Apr 11, 2014 at 7:53 AM
Edited Apr 11, 2014 at 7:53 AM
ssylvan:
The main thing is that I don't think this should be asymmetric since that would be hideously ugly and confusing
From a theoretical point of view, I would love a symmetric approach, but there are two strong arguments for asymmetry:
  • Pain-free backwards compatibility.
  • The underlying implementation is completely different: nullable value types are about adding one more valid value that wasn't there, requiring an extra field in memory.
So, assuming your syntax:
List<int> list = (List<int>)new List<int?>(); //invalid

class Person{
   public int Id; //valid, defaults to 0
}
But non-nullable reference types are about compile-time and run-time guarantees that a value that is perfectly valid (and the default one) is not going to be used.
List<string> list = (List<string>)new List<string?>(); //could be valid

class Person{
   public string Id; //invalid, init to = "" in the field or the constructor required
}
Symmetry, however, is the only way to avoid an invasion of ! everywhere in generic parameters. Using the asymmetric syntax, you almost never want a List<Person> but a List<Person!>.
They still can use T! it's just a no-op.
Why allow T! on new projects? That will only increase confusion.
Absolutely no implicit conversions from T? to T!...and means all existing library code will continue to work without a bunch of extra noise.
It's not about making the existing libraries work, it's about making the updated existing libraries work with non-updated code. If a BCL developer realizes that changing:
public string[] Split(string separator)
//for
public string![]! Split(string! separator)
is going to break zillions of lines of code (not because of the return type but because of the parameter), it will never get updated and the feature won't be used.
So other languages could be free to create this type with "default" and break your C# program
So, should run-time checks be made, or do we just trust the compiler and the fact that you are in C#?
Data flow analysis would be nice.
if(Rand(2) > 0 && x != null)
{
  // Type of x is ??
}
And why not have flow analysis to type all the rest of C# too? :)
You might dislike having a giant switch that basically changes the flavor of the language, but IMO given that this is probably the most glaring language design mistake in C# having a slight inconvenience is totally worth it. The alternative is to never fix it which is deeply unappealing to me.
I will be happy to update my code to a symmetric / fully compile-time-guaranteed C# version with non-nullables. But there are many big corporations with old code that they won't update, but lots of cash to buy the next Visual Studio. If their code breaks with the new C# compiler, or with the updated BCL, they won't buy it. So Microsoft won't update the BCL and the feature will be misused.

That's why my proposal is based on documentation using attributes, and on reducing the clutter of writing the null checks yourself. Then, some optional compile-time checks can be made on top of it.

With your proposal, moving code between projects, or just copy-pasting old code, will be a pain in the ass. I'd rather stay with the asymmetric syntax and consider making the big switch in 5-10 years from now, to a fully-static and, if we find it worth it, symmetric syntax.
Apr 11, 2014 at 10:11 AM
Edited Apr 11, 2014 at 4:38 PM
At the moment, if you want to write the following extension method, you need guard statements for null ref-types.
I apologise in advance for the use of VB.net code
Imports ConsoleApplication1.Clusivity
Module Module1

  Sub Main()
    Dim a = "A"
    Dim z = "Z"
    Dim q = "Q"
    Dim r0 = q.IsBetween(a, z,, Inclusive) ' Would pick the first overload
    Dim zero = 0
    Dim hundred = 100
    Dim twenty = 20
    Dim r1 = twenty.IsBetween(zero, hundred,, Inclusive) ' Would pick the second overload.
  End Sub
End Module

Public Module Exts
  Public Enum Clusivity As Integer
    Exclusive = 0
    Inclusive = 1
  End Enum

  <Runtime.CompilerServices.Extension>
  Public Function IsBetween(Of T As { IComparable(Of T)})(value As T,
                                                         lower As T,
                                                         upper As T,
                                       Optional lowerClusivity As Clusivity = Clusivity.Inclusive,
                                       Optional upperClusivity As Clusivity = Clusivity.Exclusive) As Boolean
    If GetType(T).IsClass Then
      ' AndAlso GetType(T).IsNonNullable  ' For non-nullable ref-types
      If value Is Nothing Then Throw New ArgumentNullException("value")
      If lower Is Nothing Then Throw New ArgumentNullException("lower")
      If upper Is Nothing Then Throw New ArgumentNullException("upper")
    End If
    Return (lower.CompareTo(value) <= lowerClusivity) AndAlso (value.CompareTo(upper) <= upperClusivity)
  End Function

End Module
If only the generic type constraints and overload resolution picked more "intelligently",
we could write the following (note: fantasy syntax):
  <Runtime.CompilerServices.Extension>
  Public Function IsBetween(Of T) _
    Where T Is Class AndAlso
          T Is IComparable(Of T) AndAlso
          T IsNot NonNullable
          ( value As T,
            lower As T,
            upper As T,
            Optional lowerClusivity As Clusivity = Clusivity.Inclusive,
            Optional upperClusivity As Clusivity = Clusivity.Exclusive
          ) As Boolean
    If value Is Nothing Then Throw New ArgumentNullException("value")
    If lower Is Nothing Then Throw New ArgumentNullException("lower")
    If upper Is Nothing Then Throw New ArgumentNullException("upper")
    Return value._IsBetween(lower, upper, lowerClusivity, upperClusivity)
  End Function

 <Runtime.CompilerServices.Extension>
  Public Function IsBetween(Of T As {IComparable(Of T)})(value As T,
                                                         lower As T,
                                                         upper As T,
                                       Optional lowerClusivity As Clusivity = Clusivity.Inclusive,
                                       Optional upperClusivity As Clusivity = Clusivity.Exclusive) As Boolean

    Return value._IsBetween(lower, upper, lowerClusivity, upperClusivity)
  End Function


  <Runtime.CompilerServices.Extension>
  Private Function _IsBetween(Of T As {IComparable(Of T)})(value As T,
                                                         lower As T,
                                                         upper As T,
                                       Optional lowerClusivity As Clusivity = Clusivity.Inclusive,
                                       Optional upperClusivity As Clusivity = Clusivity.Exclusive) As Boolean
    Return (lower.CompareTo(value) <= lowerClusivity) AndAlso (value.CompareTo(upper) <= upperClusivity)
  End Function
The first overload would only be called if the type is a nullable ref-type; all other types would invoke the second overload. This saves the use of reflection to see if the type is a class, and the guard conditions are used to check that the values passed in are not null. Both methods can still use the third (private) extension method, as that is all that differs between the two options.

The use of Where (which is similar to how Nemerle handles generic constraints) on the generic type parameters could lead to some impressive usage and specialisation of extension methods. It could possibly even prevent recursive generic types from being defined.
Apr 11, 2014 at 1:54 PM
I find it really hard to understand VB without syntax coloring, but I think you're also proposing a generic constraint for non-nullable reference types.

Why do you think this is important? There's no such constraint for nullable value types either, but you can still use them in generics (using T?).

In my preferred option, where non-nullables are just expressed with custom attributes, it's impossible to create simple overloads like:
   public static class Exts
    {
        public static void Write(string text);
        public static void Write(string? text);
    }
How would you solve this?

I think we're far away from considering such complicated cases of generic overloads with constraints.
Apr 11, 2014 at 4:45 PM
Olmo, look at the first block of code: the extension method is applicable to any type that is IComparable<T>, but it has a reflection cost that is paid for all types, even if the type is a structure, e.g. Int32. If overload resolution could distinguish between the two overloads, like in the second (fantasy) one, there wouldn't be any reflection cost, since you would know that in one overload it must be a class, so you can do the guard checks. In the other you know it's a value type and thus always has a value, so you don't need to do the null check. Because in essence you'd be checking against default(T), which is 0, which could produce a false negative if any of the values are 0.
Apr 11, 2014 at 6:22 PM
AdamSpeight2008 wrote:
Olmo, look at the first block of code: the extension method is applicable to any type that is IComparable<T>, but it has a reflection cost that is paid for all types, even if it is a structure, e.g. Int32.
The only use of reflection in that example is GetType(T).IsClass, but that's completely useless. And not only is it useless, it's actually a counter-optimization: the JIT compiler eliminates the null checks in the Int32 case, but it doesn't eliminate the GetType(T).IsClass call.

Anyway, if X! is supposed to mean NonNullable<X>, then overloading would actually be possible without needing additional generic constraints:
public static bool IsBetween<T>(this NonNullable<T> value, NonNullable<T> lower, NonNullable<T> upper) where T : class, IComparable<T> {
    return value.Value.CompareTo(lower.Value) >= 0 && value.Value.CompareTo(upper.Value) <= 0;
}

public static bool IsBetween<T>(this T value, T lower, T upper) where T : struct, IComparable<T> {
    return value.CompareTo(lower) >= 0 && value.CompareTo(upper) <= 0;
}
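For reference, a minimal NonNullable&lt;T&gt; along the lines assumed above could look like this hedged sketch (in the style of Jon Skeet's wrapper; not an actual BCL type):

```csharp
using System;

// Minimal sketch of a NonNullable<T> wrapper struct (class constraint on T).
// Note the well-known hole: default(NonNullable<T>) skips the constructor,
// so the invariant must be re-checked on access.
public struct NonNullable<T> where T : class
{
    private readonly T value;

    public NonNullable(T value)
    {
        if (value == null)
            throw new ArgumentNullException("value");
        this.value = value;
    }

    public T Value
    {
        get
        {
            if (value == null)   // guards against default(NonNullable<T>)
                throw new InvalidOperationException("No value");
            return value;
        }
    }

    // Widening T! -> T: always safe, so implicit.
    public static implicit operator T(NonNullable<T> wrapper)
    {
        return wrapper.Value;
    }

    // Narrowing T -> T!: checked at run time, so explicit.
    public static explicit operator NonNullable<T>(T value)
    {
        return new NonNullable<T>(value);
    }
}
```

The explicit/implicit split mirrors the cast directions discussed later in the thread: widening out of the non-nullable is free, narrowing into it carries a run-time check.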
Apr 11, 2014 at 7:32 PM
It couldn't eliminate the "null" checks on a struct (Int32) because Null / Nothing is the same as default(T), and in the case of default(Int32) that is 0, which is a valid value for an Int.
What about the case where T is a Date?

What if T is NonNullable<String>

What default value do you give a NonNullable<Foo> ?
Foo! x; /* what does x contain? */
Foo! y = Default<Foo!>;  /* What is the default value? */
Foo! z = New Foo()
        /*  | <- What value is z at this point? It'll be in an invalid state. */
Apr 11, 2014 at 8:16 PM
AdamSpeight2008 wrote:
It couldn't eliminate the "null" checks on a struct (Int32) because Null / Nothing is the same as default(T), and in the case of default(Int32) that is 0, which is a valid value for an Int.
Nope, null isn't the same thing as default(T). Value types can never be null, so a condition like "value == null" is always false for value types; the JIT compiler knows that and eliminates the code. I don't know much VB, but as far as I can tell "value Is Nothing" is exactly the same thing as C#'s "value == null". In contrast, "value = Nothing" is the same thing as "value == default(T)".
What if T is NonNullable<String>
That means you'd get NonNullable<NonNullable<T>>. Jon Skeet's NonNullable has a class constraint for T, so something like that doesn't compile. Much like Nullable<Nullable<int>> doesn't compile.
Foo! x; /* what does x contain? */ & co.
That's one of the many problems that non-nullable types have, and it has already been mentioned (see the blog post written by Eric Lippert that's linked in the original post). Note that I'm not saying that NonNullable is a solution, nor am I advocating for it; I'm just saying that overload resolution can work if NonNullable is used. In fact, I don't believe that "the billion dollar mistake" can actually be solved.
Apr 11, 2014 at 8:55 PM
Edited Apr 11, 2014 at 8:57 PM
I don't like the idea of NonNullable<string>: it makes it look like a generic type, when in fact there's no additional data to be added; instead, data has to be removed.

mdanes:
In fact, I don't believe that "the billion dollar mistake" can actually be solved.
When you say that, which part do you think won't be possible?

Step 1: Add ! just as C# documentation on public members
Step 2: Export this documentation using attributes for other languages to consume (or to consume .dlls from C#)
Step 3: Add some run-time guarantees by injecting ifs/throws here and there
Step 4: Add some compile-time warnings that can be disabled, or made to become errors, on the project properties page


.... somewhere in the future...

Step 5: Make the compile-time guarantees rock solid because now all the code should have been adapted to the new non-nullable style
Step 6: Optionally switch syntax using an automatic conversion tool to reduce clutter and make it symmetric with nullables: string! -> string, string -> string?
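Steps 1-3 can be sketched in plain, current C#. The [NotNull] attribute below is an invented name (not an existing BCL type), and in the real proposal the compiler would emit both the attribute and the injected check from a single ! annotation:

```csharp
using System;

// Hypothetical attribute the compiler would emit when erasing the ! (Step 2).
[AttributeUsage(AttributeTargets.Parameter | AttributeTargets.ReturnValue)]
public sealed class NotNullAttribute : Attribute { }

public static class Greeter
{
    // The source would read: public static string! Greet(string! name)
    [return: NotNull]
    public static string Greet([NotNull] string name)
    {
        if (name == null)   // injected by the compiler (Step 3)
            throw new ArgumentNullException("name");

        return "Hello " + name;
    }
}
```

Other languages, or older C# compilers, would see an ordinary string parameter plus the attribute, which is exactly the backwards compatibility the proposal is after.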
Apr 12, 2014 at 9:33 AM
I don't like the idea of NonNullable<string>: it makes it look like a generic type, when in fact there's no additional data to be added; instead, data has to be removed.
I'm not sure what you're trying to say here. NonNullable<T> doesn't add any data, it just adds rules about the data. References are variables, so they have storage. That means that no matter what you do, a reference will have a value: it can be garbage (in non-type-safe languages like C/C++), it can be null, or it can be a valid reference to an object. There's no way to actually remove data; you can only try to make it so that the data respects some rules.

Ultimately I'm simply trying to highlight some issues and possibilities. For example, NonNullable<T> is a distinct type and as a result it allows overloading, while [NotNullable] does not. Is overloading actually needed? Perhaps not, but this would be another difference from the existing Nullable: you can overload on "int?" but not on "string?". Not to mention that if T! isn't actually a type, things get quite weird from the language point of view: if T! isn't a type, then what is it? How do you describe a conversion between T! and T exactly?
When you say that, which part do you think won't be possible?
"The billion dollar mistake" was the addition of null to the type system. If you want to actually solve this then you have to take null out of the type system. That's nearly impossible to do in a type safe imperative language with a large existing user base. And even if you do it you'll likely find soon enough that some of the problems supposedly caused by null do not have anything to do with it, they're just logic bugs and a NullReferenceException is just a symptom.
Apr 12, 2014 at 4:53 PM
The reason I don't like NotNullable<T> is because it implies the following:
  • That NotNullable<T> is an expansion, not a reduction, of the type. Theoretically you could write a generic type like Positive<int>, but those are not very common.
  • That List<NotNullable<T>> is not convertible to List<T> or the other way around. While this could have been desirable if the feature had been introduced from the beginning (as Nullable for both value and reference types), not allowing this conversion will break too much code when the library is updated. We have to keep it as a compile-time-only construct with no changes to the data structure itself.
  • That typeof(NotNullable<>) or typeof(NotNullable<string>) makes sense.
My proposed solution does not respect any of these rules, so I think NotNullable<T> is a false friend.

I prefer just T!. It's a special-syntax-only feature because it behaves in a special way in the compiler: trying to fix the problems of the past without breaking too many things, and clearing the way for a proper implementation in the future.
if T! isn't a type then what is it? How do you describe a conversion between T! and T exactly?
It's a type for the compiler, not for the CLR, just like Java generics. The compiler erases all the non-nullable type information when compiling to the CLR, so it's compatible when calling old libraries, and compatible with other languages that choose not to implement the feature.

The compiler, however, exposes the type information to the tooling (tooltips, etc...), and exports the erased type information as attributes. Just in case the consumer of the dll is also interested.

The compiler also saves you from writing some null checks on input and output parameters, and on conversions from T to T!.

Finally, the compiler gives you some optional warnings that you can disable or make them become errors.

If the compiler is able to catch 70% of the errors at compile time, and automatic run-time checks can find another 29% earlier in the stack trace and with fewer keystrokes, we will be in a much better position than now. And soon we could start thinking about making the use of T! mandatory with a compile-time error, 100% compile-time guarantees, better performance (avoiding run-time checks) and a nicer syntax (just ?).
How do you describe a conversion between T! and T exactly?
Converting from T! to T is a no-op, just like converting Cat to Animal.
Converting from T to T! just checks for not-null, but makes no changes to the underlying reference, just as converting from Animal to Cat checks the type but doesn't change the reference.
Converting from List<T> to List<T!> or the other way around, however, makes no change at all to the reference, but changes the generated code when accessing the list. This is unsafe, but backwards compatible. You cannot ignore that 99.9% of the code is asking for a List<Person>, even if, in fact, nulls are not allowed in the list and the intention was a List<Person!>.
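The T to T! conversion described above could compile down to something like this hedged sketch (CheckedNonNull is an invented helper name standing in for compiler-generated code):

```csharp
using System;

public static class NonNullConversions
{
    // Invented helper: what the compiler-emitted narrowing T -> T! would do.
    // The reference itself is returned unchanged; only a check is added.
    public static T CheckedNonNull<T>(T value, string typeName) where T : class
    {
        if (value == null)
            throw new InvalidCastException(
                "A null reference cannot be converted to " + typeName + "!");
        return value;
    }
}

// So the line        string! s2 = s;
// would compile as   string s2 = NonNullConversions.CheckedNonNull(s, "string");
// while T! -> T and List<T> <-> List<T!> would compile to nothing at all.
```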
For example NonNullable<T> is a distinct type and as a result it allows overloading while [NotNullable] does not. Is overloading actually needed? Perhaps not but this would be another difference between the existing Nullable - you can overload "int?" but not "string?"
I think overloading is not an issue. A C# compiler could be written that takes non-nullability as just another overload criterion. It already takes ref and out as criteria, even though they are not strictly part of the argument type. That could make some reflection code break, however, but maybe it's useful enough.

So, I think it's clear that a perfect solution can't solve this particular problem, due to backwards compatibility.
  • Could my imperfect solution be better than doing nothing?
  • How far should we go with run-time checks and compile-time warnings?
  • Is checking out parameters and return values a good idea?
  • Which cases will be hard to catch by the warnings?
Apr 12, 2014 at 6:52 PM
Edited Apr 12, 2014 at 7:38 PM
T! -> T would be valid, as every non-null T is also a valid T.
This would be a semi-Widening Cast, because you're not losing any data, but you are losing the information that it is non-nullable.
(An implicit cast.) This implies that List<T!> -> List<T>.

T -> T! would be a Narrowing Cast, because you've got the potential to lose information, namely that it may be null.
This would require an explicit cast, so the programmer is then responsible for checking that this is a valid conversion. So List<T> -> List<T!> wouldn't be valid,
like a List<Fruit> -> List<Apple>, because it could contain a Banana.

In essence you could "think" of T! as a subtype of T, e.g. T! <: T.

This would make the conversions compatible in vb.net's Option Strict On mode.

Passing default(T) to a T! would be a compile-time error, just like optional parameters require a constant expression, a new struct type, or a default(T) where T is a struct.
It'll require a constant expression or a new instance of a class.
string! foo = "foo"; // valid
string! foo = new string('*', 20); // valid
string! foo = default(string); // invalid
string! foo; // invalid

Dim foo As String! = "" ' Valid
Dim foo As String! = New String("*"c, 20) ' Valid
Dim foo As String! = Nothing ' Invalid
Dim foo As String! ' Invalid
We could then have the following syntax to define a default instance.
public class Foo
{
 [threadlocal]
 protected static default Foo! = new Foo();  // used whenever default(Foo!) is needed
}
Thinking about it VB.net has a form of "nonnull" for windows Forms, called default instances in vb.net.

Edit

We already have Positive<Int>: that's UInt.

If collection indices are 0 to n-1 and never negative, why do we use Integer in the indexer rather than UInteger?
Apr 12, 2014 at 7:38 PM
I know that T -> T! is a narrowing conversion that could potentially throw an exception, and that List<T> -> List<T!> won't be allowed for normal types, no matter what relationship there is, because List is neither covariant nor contravariant.

But if you do that, nobody could update their library to use non-nullability without breaking lots of code! It's 2014 and we cannot afford to solve non-nullability the elegant way, with so much already written in C# and .Net. Maybe in a future D# and .Com we can, but not this time.

I'm trying to solve the problem by creating a weak-typed solution that can catch most of the problems without breaking current code and without making updated libraries break code.
  • That's why I chose to use dirty custom attributes instead of a struct to encode non-nullability.
  • That's why I chose warnings that can be disabled (or converted to errors) for things that have all the requirements to be proper errors: they break the type system.
  • That's why I have to make some (expensive?) checks at run-time and not trust the type system.
If somebody thinks this approach is not radical enough and we should break every single line of code that uses the framework, or that the framework should not be updated, or that there are particular details of my solution that I'm missing, I'm open to discussion.

But please don't explain to me why my solution is not perfect. I know, really; I've read the two blog posts that I mentioned at the very beginning.

So, this is how the code would look in your examples:
string s = rand(2) == 1 ? "Hello" : null;

string! s2 = s; // implicit conversion, but creates a warning that can be disabled per-project, or silenced in this particular case by adding an explicit cast; a run-time check is also made 
Will get compiled to something like:
string s = rand(2) == 1 ? "Hello" : null;

string s2;  //not-nullable is gone
if(s == null)
   throw new InvalidCastException("Impossible to cast a null value to a non-nullable reference type"); 

s2 = s;  
And for lists
List<Person> list = GetPerson(); 

List<Person!> list2 = list; //implicit conversion with no run-time checking, and no warnings

list2.Add(null); //warning and runtime-check 

Person! person = list2[0]; //runtime-check
Will get compiled to
List<Person> list = GetPerson();  //not-nullable is gone

List<Person> list2 = list; 

if(null == null) 
    throw new ArgumentNullException("item");

list2.Add(null); 

Person _temp = list2[0]; //not-nullable is gone
if(_temp == null)
    throw new ArgumentNullException("_temp"); 
Person person = _temp; 
Apr 12, 2014 at 7:41 PM
I know about uint; I was trying to make an argument about generic types increasing expressiveness, not decreasing it. Maybe Positive<int> was not the best example; PrimeNumber<int> or OddNumber<int> would be better examples.
Apr 12, 2014 at 7:44 PM
Edited Apr 12, 2014 at 7:47 PM
@Olmo You've read my example incorrectly.

Converting a T! to T aka T! -> T aka Func< T! , T > would be a valid implicit conversion.
Converting a T to T! aka T -> T! aka Func< T , T! > would require an explicit cast.
Apr 12, 2014 at 8:01 PM
Edited Apr 12, 2014 at 8:02 PM
I don't think default instances are a good idea. Even being ThreadStatic, two uninitialized variables will get the same instance; and even if a new one were created each time, you would get unexpected results. There are also no sensible default values for most types.

You make a valid point on ThreadStatic: AFAIK there's no way to run code to initialize ThreadStatic variables every time a new thread is created, so non-nullable ThreadStatic fields should be forbidden.

Instead, we could write this pattern:

static Foo field; 

public static Foo! Property  
{
    get { return (Foo!)(field ?? (field = new Foo())); }
}
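An alternative sketch of the same lazy pattern using Lazy<T>; the `!` on the property type is still the hypothetical syntax:

```csharp
static readonly Lazy<Foo> field = new Lazy<Foo>(() => new Foo());

public static Foo! Property
{
    // Lazy<T>.Value never returns null here because the factory always
    // returns an instance, so the narrowing cast cannot fail
    get { return (Foo!)field.Value; }
}
```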
Apr 12, 2014 at 8:02 PM
Edited Apr 12, 2014 at 8:07 PM
Olmo, this sounds like dependent types. How does F# handle units of measure?

An uninitialised T! would be a compiler error.
Foo! a = default(Foo!);
Foo! b = default(Foo!);
// a and b would point to the same instance of Foo!.
Like
Color a = Color.Blue;
Color b = Color.Blue;
would point to the same instance of Color
Apr 12, 2014 at 8:05 PM
Converting a T! to T aka T! -> T aka Func< T! , T > would be a valid implicit conversion.
Converting a T to T! aka T -> T! aka Func< T , T! > would require an explicit cast.
I think I got it right; look, here I'm converting from string to string! with no cast at all:
string s = rand(2) == 1 ? "Hello" : null;

string! s2 = s; // implicit conversion, but creates a warning that could be disabled per-project, or removed in this particular case adding a explicit cast, also a run-time check is made
Apr 12, 2014 at 8:09 PM
Olmo, sounds like Dependant Types. how does F# handle Units Of Measure?
They are not very specific but they say:
Units of measure are used for static type checking. When floating point values are compiled, the units of measure are eliminated, so the units are lost at run time. Therefore, any attempt to implement functionality that depends on checking the units at run time is not possible. For example, implementing a ToString function to print out the units is not possible.
There has to be a way to know this information when you reference a compiled DLL... or is there not? Anyway, adding attributes looks like a sensible solution.
Apr 12, 2014 at 8:59 PM
TL;DR: Syntactic sugar is needed so third party dev-time tools can achieve most of the goals of static analysis without all of the boilerplate currently necessary.

The nullable problem can be solved largely at compile/static analysis time as seen in Code Contracts and ReSharper's NotNull attribute. The problem with Code Contracts is that the amount of boilerplate code required to fully validate your program is incredibly high. I recently did an entire medium sized project with full Code Contract validation. I would say that over half of my code was just Code Contracts for NotNull and it made my classes incredibly ugly and difficult to read. That being said, I have no NREs and my code is very easy to reason about (once you learn to visually ignore the Code Contracts stuff).

I also have a large legacy codebase I am working with and have been using Resharper's NotNullAttribute on any code I touch. It has greatly improved my ability to reason about the code and the static analyzer tells me when I get something wrong. Because this is purely a static analysis tool, it integrates well with legacy code.

In my opinion, the thing that would be the biggest boon isn't runtime checking but simply making it easy to let static analyzers know that a particular member is intended to be Non-Null. At that point third party tools and perhaps one day Roslyn itself can do static analysis leveraging those tokens. I can also see an opportunity for third party tools to inject validation (like runtime Code Contracts does) into the IL at compile time.

Olmo's suggestion of simply allowing the C# language to have a ! token would go a long way toward this. It would let third-party tool developers see the token in the syntax tree and then take action when running static analysis.

As Olmo has said before, this isn't guaranteed NotNull, this is a static analyzer hint. I believe there is great value in providing hints to the static analyzer. Roslyn itself can take this however far it wants, the language just needs to allow for it so third party tools can do it in a standard way.

Caveat: The problem with attributes in the underlying IL is that I don't believe they can be applied to generic types. This is the biggest problem with ReSharper's [NotNull]. I can't have List<[NotNull]String>. If ! was used, then the NotNull information would be lost at IL generation time: List<String!>. I'm not sure if this is a solvable problem, but again... allowing for it would at least allow static analyzers to leverage it in the current assembly. Perhaps an attribute can be put on the generic instantiation so the information is retained (albeit, difficult to leverage)?
Apr 12, 2014 at 9:28 PM
The nullable problem can be solved largely at compile/static analysis time as seen in Code Contracts and ReSharper's NotNull attribute.
I'm interested: does ReSharper / Code Contracts make assumptions about third-party code? Or .Net Framework code at least? Or will it be happy with something like:
"hello, world!".Split(null)
In my opinion, the thing that would be the biggest boon isn't runtime checking
But then you will have to repeat things twice:

public void WriteLine(string! str){
   if(str == null) 
       throw new ArgumentNullException("str"); //I'm getting boooored
}
Also, I don't think non-nullability is a corner case that should be handled by third-party static analyzers. It's the biggest problem, and there should be a solution out of the box. I would love to see a more pluggable compiler for any other scenario, however.
The problem with attributes in the underlying IL is that I don't believe they can be applied to generic types
I've proposed that the NotNullAttribute has a string (or bitfield) with the indexes of the notnullable types in the declaration, so:
List<string>! -> NotNull("1")
List<string!>! -> NotNull("1,2")
List<string!> -> NotNull("2")
Dictionary<string!,string!> -> NotNull("2,3")
Dictionary<string!,List<string!>!> -> NotNull("2,3,4")
This attribute can be placed on any class member (properties, method return types, method arguments...), so the covered surface should be quite big. You can even place attributes on delegate parameters, I've tested it!
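A sketch of what the encoding attribute itself could look like; the string-of-indexes scheme is from the proposal above, but the attribute's name and shape are my assumption:

```csharp
// Hypothetical attribute: the compiler erases '!' from signatures and records
// which positions of the type tree (numbered left to right, starting at 1)
// were non-nullable.
[AttributeUsage(AttributeTargets.Parameter | AttributeTargets.ReturnValue |
                AttributeTargets.Property | AttributeTargets.Field)]
public sealed class NotNullAttribute : Attribute
{
    public string Indexes { get; private set; }
    public NotNullAttribute(string indexes) { Indexes = indexes; }
}

// Source:       Dictionary<string!, List<string!>!> Map(string! key)
// Compiled as:  [return: NotNull("2,3,4")]
//               Dictionary<string, List<string>> Map([NotNull("1")] string key)
```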

Inheritance / Implementation could make it more complicated.
public class  MyClass : MyBase<string?>, IComparable<string?>,  IComparable<MyBase!>
can not be compiled to
public class  MyClass : [NotNull("1")]MyBase<string>, [NotNull("1")]IComparable<string>,  [NotNull("1")]IComparable<MyBase>
So maybe it should be compiled to something like
[NotNullBase(typeof(MyBase<string>), "1")] // for the base class MyBase
[NotNullInterface(typeof(IComparable<string>), "1")] // for the first interface implementation
[NotNullInterface(typeof(IComparable<MyBase>), "1")] // for the second interface implementation
public class MyClass  : MyBase<string>, IComparable<string>,  IComparable<MyBase>
Apr 13, 2014 at 9:34 PM
Olmo wrote:
Why allow T! on new projects? That will only increase confusion.
Because it allows gradual upgrading of legacy code. Start with the "opt out switch" set, then gradually introduce T! and T? (which is a no-op at this point) where needed, then turn off the switch. You can go back and remove all the T! after that, but you don't have to.
It's not about making the existing libraries work; it's about making updated libraries work with non-updated code. If a BCL developer realizes that changing:
public string[] Split(string separator)
//for
public string![]! Split(string! separator)
is going to break zillions of lines of code (not because of the return type but because of the parameter), it will never get updated and the feature will not be used.
I wouldn't expect them to just change the types of existing library methods, just like they didn't when generics were introduced. What you do instead is make a new version and rely on overload resolution to pick the newer one in code that's aware of the new features, but allow legacy code to use the old ones. Just like with generic collections vs. non-generic ones.
So other languages could be free to create this type with "default" and break your C# program
So run-time checks should be made, or just trust the compiler and the fact that you are in C#?
The CPU/OS already does runtime checks for any pointer access, you can't avoid it. Catch the access violation and raise an NPE.
Data flow analysis would be nice.
if(Rand(2) > 0 && x != null)
{
  // Type of x is ??
}
The only way to get into the body of that if-statement is if x is non-null, so the compiler would coerce the type to its non-nullable version.
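Under that rule, a sketch of how the flow analysis would behave (hypothetical `!` syntax; Rand is the example's own placeholder):

```csharp
string x = Rand(2) == 1 ? "Hello" : null;

if (Rand(2) > 0 && x != null)
{
    // the null test dominates this block, so x is coerced to string! here
    string! y = x;   // no cast, no warning, no runtime check needed
}

string! z = x;       // outside the check: warning plus runtime check (or explicit cast)
```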
I would be happy to update my code to a symmetric, fully compile-time-guaranteed C# version with non-nullables. But there are many big corporations with old code that they won't update, but lots of cash to buy the next Visual Studio. If the code breaks with the new C# compiler, or with the updated BCL, they won't buy it. So Microsoft won't update the BCL and the feature will not be used.
This didn't happen when generics were added, so I'm not sure why it would happen this time. Old code still works and can interop with new code. There's a path for gradual upgrading. New code can use the new stuff exclusively. It seems completely analogous to the non-generic -> generic transition.
Apr 13, 2014 at 9:41 PM
Another option is to have two levels of "opt out" for the non-nullable references feature.
  1. Transition flag. Old code works fine, but to interop with any new code that has non-nulls you have to explicitly use !
  2. Legacy flag. Old code works fine, and any non-nullables that come in transparently converts to nullables (with runtime checks like before). This means you can use any new BCL features that rely on non-nullable references (you just won't get the compile time checks).
If you only have 2) then you could indeed convert the entire BCL to non-nullable style. The downside is that converting legacy code is less gradual, because non-nullable references don't "propagate" since they silently convert to nullable references.
Apr 13, 2014 at 11:28 PM
Hey, I really like this idea! But it is harder to implement:

All the code in transition mode (strict mode) will need to be 100% type safe at compile time, so no performance is lost in runtime checks.

On the other side, on the frontier between legacy and strict mode, null checks will be necessary. So this means that the checks have to be made in the client code, not in the method implementation.

One problem also is calling strict assemblies from languages that do not support the feature. They will need to implement the checks.

Another is what to do with generic types: will you be strict with them? Then there's no way to cheaply convert a List<Person> to a List<Person!>.
Apr 14, 2014 at 12:52 AM
Edited Apr 14, 2014 at 1:27 AM
 if(Rand(2) > 0 && x != null)
 {
   // Type of x is ??
 }
Why would it coerce it?
If x is T then x would still be a T it just has a non-null value. Inside the body of the condition x wouldn't suddenly be a T!.

The problem is T -> T!

T! needs to be a concrete type, like Nullable<T>, to guarantee both compile-time and runtime type safety. In Java generics the generic type information is forgotten at runtime and doesn't ensure type safety, whereas C# does remember it at runtime (because it's encoded in the type) and ensures runtime type safety.
If it is a compile-time "trick" and that information is forgotten, then if I use some compiled library DLL that uses T!, that type information is forgotten: it essentially becomes T, and null is a valid input again.
Whereas if it isn't forgotten, then I couldn't pass it a null value either at compile time or at runtime, since that information is encoded in the type.

It would be better if the compiler could distinguish overloads of T and T!. Using T! where a T is expected would be a valid implicit conversion, so it's still compatible with existing libraries. The reverse conversion, T to T!, would require an explicit conversion, just like converting an Int32 to a Byte (since not all values of Int32 are valid values for a Byte), and would still be compatible with existing libraries.

In essence T -> T! is a narrowing conversion, since you're reducing the number of valid values for T.
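The Int32/Byte analogy side by side (the reference-type half uses the hypothetical `!` syntax, and GetStringOrNull is an assumed helper):

```csharp
int i = 300;
byte b = (byte)i;              // explicit: narrowing, not every int fits in a byte
int j = b;                     // implicit: widening, always safe

string s = GetStringOrNull();  // assumed helper that may return null
string! n = (string!)s;        // explicit: narrowing, not every string is non-null
string t = n;                  // implicit: widening, always safe
```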

List<Person> -> List<Person!> is essentially equivalent to doing List<Object> -> List<String>, which is potentially not a valid operation, because the original can contain something that isn't a String. To convert from one to the other you would have to filter (filtering out the null elements) and then cast to T!.
Eg.
list.Where( x => x != null ).Cast<String!>().ToList();
// With a NonNull extension method:
list.NonNull().Cast<String!>().ToList();
If there were an INonNullEnumerable<T>, like there is an IOrderedEnumerable:
// NonNull

 static INonNullEnumerable<T> NonNull<T>( this IEnumerable<T> xs ) where T : class
 {
   return new NonNullEnumerable<T>( xs.Where( x => x != null ) );
 }
then INonNullEnumerable<T> -> IEnumerable<T!> would be a valid conversion.

IEnumerable<T!> equates to IEnumerable<T> where T : class, with the additional constraint that every x in the sequence is non-null.

Existing libraries wouldn't have to be changed. In fact they would be extended / refactored to include overloaded versions of the existing methods.
Apr 14, 2014 at 7:33 AM
AdamSpeight2008:
Why would it coerce it?
If x is T then x would still be a T it just has a non-null value. Inside the body of the condition x wouldn't suddenly be a T!.
Is an answer to:
ssylvan:
Data flow analysis would be nice. So any code of the form "if (x != null) { .... x .... }" the x will have a non-nullable type inside the body. This would make upgrading code much easier since many types will already have been converted to T! by prior null checks.
AdamSpeight2008:
In Java generics the generic type information is forgotten at runtime, and doesn't ensure type safety, whereas C# does remember at runtime (cos it encoded in the type) and ensure runtime type safety.
I know that from the compiler's point of view it will be a broken solution, but it will be a solution that you could actually use! The other option is to have 100x more outdated code (or breaking changes) than when generics were introduced in C#:
olmo:
Even if the solution is 100% backwards compatible from a language standpoint (the new compiler will be able to compile the old code), the solution is almost 0% backwards compatible for the updated API.

Think about it: generics were introduced in 2005, when C# was about 3 years old. The amount of code written in C# since then has multiplied by at least 10x.

Also, generics required you to update your collections. How many lines of code would break when you removed all the ArrayLists and changed them to List<T>? Similarly for dictionaries, etc... 10% at most.

Now, how many lines will change if you make non-nullable all the properties, method arguments and return types that theoretically should be non-nullable?
At least 90%.

So, a gross estimation is that, globally, we are dealing with a problem that is 100x bigger than when .Net generics were introduced.
AdamSpeight2008:
List<Person> -> List<Person!> is essentially equivalently to doing List<Object> -> List<String>
From a type-theory point of view, you're right: they are just as unsafe. From a practical point of view they are completely different problems:

When generics were introduced in C# 2.0, there was no code already written doing a conversion from List<Object> -> List<String>.

Now, there are zillions of List<T>, Dictionary<K,V>, ... and 99% of them actually mean List<T!>!, Dictionary<K!,V!>!... You don't give an answer to this problem:
  • Libraries have two options: break the code or ignore the new feature.
  • Clients have two options: make really expensive O(N) conversions or ignore the feature.
I bet you one billion $ both choose to ignore the feature :)


AdamSpeight2008:
Existing libraries wouldn't have to be changed. In fact they would be extended / refactored to include overloaded versions of the existing methods.
  1. You cannot overload on return types, and
  2. you cannot overload properties.
  3. You cannot overload on your inheritance chain: class Team : List<Player!>
  4. How many overloads do you put in a method with 4 reference types? 2? 2^4 = 16?
  5. And what about generics? A method taking a List<string> — should it be overloaded to
    • List<string!>
    • List<string>!
    • List<string!>!
Consider that 99% of the data structures out there are not meant to hold null values. You'd have to update everything.

And it's even worse! What's the point of having a rock-solid compiler if in the end most of the methods have unsafe overloads?
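To make the combinatorics of point 4 concrete: being precise about a method with just two reference-type parameters already takes four overloads (hypothetical `!` syntax):

```csharp
void Copy(string  source, string  target);  // the existing method
void Copy(string! source, string  target);
void Copy(string  source, string! target);
void Copy(string! source, string! target);
// With 4 reference-type parameters there are 2^4 = 16 combinations.
```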
Apr 14, 2014 at 10:53 AM
What I'm proposing doesn't involve rewriting many of the existing methods.
The String! and String overloads can share the internals, so long as the public method signatures remain the same.

1) No, you'll be overloading on input types, not return types; it also means you can have a different return type.
 public string Substring ( string s, Int32 index )
 { 
   if( s == null ){ throw new ArgumentNullException("s"); }
   /* code that was here is moved into the internal version of the method */
   return _Substring( s, index ); /* implicit cast from String! -> String (since every T! is a valid T) */
 }
 public String! Substring( String! s, Int32 index ) /* different input signature, so it can be an overload */
 {
   return _Substring( (string)s, index );
 }
 internal String! _Substring( string s, Int32 index )
 {
  if( (index < 0) || (index >= s.Length) ) { throw new ArgumentOutOfRangeException("index"); }
  /* code for doing the substring is here now */
 }
or this alternative:
 public string Substring ( string s, Int32 index )
 { 
   /* code that was here is moved into the internal version of the method */
   return _Substring( s, index, true ); /* implicit cast from String! -> String (since every T! is a valid T) */
 }
 public String! Substring( String! s, Int32 index ) /* different input signature, so it can be an overload */
 {
   return _Substring( (string)s, index, false );
 }
 internal String! _Substring( string s, Int32 index, bool doNullChecks )
 {
  if( doNullChecks ) /* extra guard condition */
  {
     if( s == null ){ throw new ArgumentNullException("s"); }
  }
  if( (index < 0) || (index >= s.Length) ) { throw new ArgumentOutOfRangeException("index"); }
  /* code for doing the substring is here now */
 }
2) Then the programmer using those properties would require an explicit cast on the results, or the implementer of those properties would use the non-null version of the type, if that is the intent of the property.
3) Why would you want to overload via inheritance? You're the programmer; you decide.
4) Just two.
  1. All nullable ref types (the existing method).
    If any of them is potentially null, then presume all are.
  2. All non-null ref types (the overload of the method).
    The nullable version can use the same shared method (just validate the input and pass it on to the shared method;
    see the example above).
5) If you're that paranoid about possible input types, Olmo, then yes, do that. Me, I'd just write the two overloads explained above (4).
Think that 99% of the data structures out there are not meant to have null values. You'll have to update everithing.
99% of the data structures out there, should then already have null reference checks on the inputs. So I don't think they would need changing.
And It's even worst! What's the point of having a rock-solid compiler if at the end most of the methods have unsafe overloads?
Backwards compatibility of existing code.

As I see it, as a coder you either stay in the non-null world or stay in the null world. Inputs are non-null, outputs are non-null.

I wonder if it would be possible to write a Roslyn code fix to produce the overloads for you?
Apr 14, 2014 at 11:09 AM
From a type theory point of view, you right, they are just as unsafe. From a practical point of view they are completely different problems:
No, they're the same problem.
  X <: B
  Y <: B
Is ((X as B) as Y) safe?
Apr 14, 2014 at 1:05 PM
What I'm proposing doesn't involve rewrite any of the much of existing methods. The String! and String overloads can share the internals, so long as the public method signature remain the same overloads.
I know that you don't have to rewrite the bodies of the functions, but you'll have to duplicate the overloads for pretty much every function... that's a lot of work and not beautiful in IntelliSense.
1) No you'll be overloading on input types not return types, also means you can have different return type.
Methods that return non-null but take nothing, or whose parameters have no relationship to the result, will still be an issue: Console.ReadLine(), list indexers, and a long etcetera...
2) Then the programmer using those properties would require an explicit on the results, or the implementer of those properties to use the nonnull version of the type, if that what the intent of the method is.
So, uglier code on the client side or a breaking change for at least 60% of the properties... a horrible dilemma that you have to resolve for every property and most of the methods.
3) Why would you want to overload via inheritance? You're the programmer deciding.
I don't want to overload via inheritance; I want to inherit from List<Person!> without breaking code. Overloading is your only answer, and it doesn't work here.
4) Just two
I agree that, for most practical purposes, two overloads will be enough, as long as no generics are involved
5) If you're that paranoid about possible input types, Olmo then yes do that. Me I'd just write the two overloads explained above (4).
Once you take generics into account, with your alternative all the casting operations become O(N) operations and require allocations of big objects. That can really affect performance, so lots of overloads will be necessary to avoid that, and we'll still have problems with properties and method return types.
99% of the data structures out there, should then already have null reference checks on the inputs.
Just these two already cover 70% of the cases:
List<string> str = new List<string>();
str.Add(null); //no exception here

Dictionary<string, string> dic = new Dictionary<string, string>();
dic.Add("key", null); //also not here
So I don't think they would need changing.
Even if all of them validated for null, you'd still need to update the code to make the signatures non-nullable, so you get compile-time warnings / documentation instead of run-time exceptions... that is what all of this is about.
Backwards compatibility of existing code.
I know backwards compatibility is the reason to write a new overload for... almost every method, but the end result is that you have the same problems you had before, plus a lot of new redundant code to maintain.

Conclusion:
If we were starting from scratch I would make a rock-solid, struct-based solution as you suggest, possibly using just Nullable<T> for both reference and value types, and maybe in the future we can do that; but for now the guarantees have to be optional and easy to disable, or it won't get into C# / .Net.
Apr 14, 2014 at 6:21 PM
Edited Apr 14, 2014 at 6:26 PM
@Olmo, you really need to think about the example you provide.
List<string> str = new List<string>();
str.Add(null); //no exception here

Dictionary<string, string> dic = new Dictionary<string, string>();
dic.Add("key", null); //also not here
Those type arguments for TKey and TValue (String and String respectively) are nullable ref types, so null is valid input for both.
Dictionary< string, string > dic = new Dictionary< string, string > ();
string! k = "Apple";
string! v = "Pie";
dic.Add ( k , v ); // still valid input (via the implicit cast of T! -> T),
// or the CLR runtime is aware of non-null refs and passes in the reference to the internal field _Value
If you changed those to check for null references on input, it would break existing code.

Console.ReadLine can never return a String! (from MSDN: Console.ReadLine):
If the Ctrl+Z character is pressed when the method is reading input from the console, the method returns Nothing. This enables the user to prevent further keyboard input when the ReadLine method is called in a loop. The following example illustrates this scenario.
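So ReadLine's signature would have to stay nullable; a caller that wants non-null values narrows explicitly, e.g. (hypothetical `!` syntax; Process is an assumed method taking a string!):

```csharp
string line;
while ((line = Console.ReadLine()) != null)
{
    // inside the loop body the flow analysis (or an explicit cast) yields a non-null value
    string! l = (string!)line;
    Process(l);
}
```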
You cannot presume that when the implementer declared a parameter or return type as T they actually meant T!, e.g. Foo! instead of Foo.

None of the existing parameters that take a reference type need to be fixed, as they are still valid if you pass a non-null reference as the argument. The only downside is that it would still perform the null reference checks, if it has them.

TryCast for T -> T!
bool TryCast<T>( T input, out T! output ) where T : class
{
  if ( input == null )
  {
    if ( typeof(T).HasDefault() ) 
    {
      output = default(T!); // use the default instance of this T (lazily evaluated: only created if it is actually used)
      return false;
    }
    else
    {
      throw new NonNullRef_MissingDefaultInstanceException();
    }
  }
  else
  {
    output = new T!( input, ignoreNullChecks: true ); // lifts the T into a T!
    return true;
  }
}

 
Apr 14, 2014 at 6:42 PM
If overloading on return types were allowed, then you could sort of dope/taint the output type.
if( target.type == source.type ) 
{
   target = source;
}
else
{
  if( source.HasOverloadThatReturns( target.type ) )
  {
    method m = source.GetOverloadThatReturns( target.type );
    target = m.Invoke();
  }
  else if( source.type.IsCastableTo( target.type ) )
  {
    target = (target.type)source;  
  }
  else
  {
    throw new IncompatibleTypeException("Type {0} is not castable to Type {1}", source.type, target.type );
  }
}
Apr 14, 2014 at 6:57 PM
Edited Apr 14, 2014 at 7:06 PM
@AdamSpeight2008
Those types in TKey and TValue (String and String respectively) are nullable ref-type so null in both are valid input.
You were the one who said that the collections should check for null, not me. I'm just showing you that it's not true.

My problem is not that non-null values cannot get into nullable collections, it's exactly the opposite!
Dictionary<string!, string!> dic = new Dictionary<string!, string!> ();
string k = "Apple";
string v = "Pie";
dic.Add ( k , v ); // compile time error
That looks reasonable if all of this is your code, but not if it's third-party code:

In .Net framework:
PropertyInfo[] GetProperties() 
In my code:
PropertyInfo[] props = GetProperties(); 
string name = props[0].Name;
In .Net framework updated:
PropertyInfo![]! GetProperties() 
But my code:
PropertyInfo[] props = GetProperties();  //invalid conversion at compile time
Now I can cast the whole array:
PropertyInfo[] props = GetProperties().Cast<PropertyInfo!>().ToArray();  //slow
Or change it:
PropertyInfo![]! props = GetProperties(); 
string name = props[0].Name; //now the error is here
but probably you are returning this from your method or property, so using the feature is a viral breaking change that pushes you to virtue, whether you like it or not.
PropertyInfo![]! props = GetProperties(); 
string! name = props[0].Name; 
Personally I'd be happy with it; if I have to spend a month removing nulls from the code base I'll do it (I currently have 151 errors in my VS because of a different refactoring), but this is not the case for most .Net shops, and Microsoft knows it.
Console.ReadLine
OK, you're right: there's a case where ReadLine returns null. I was just choosing a well-known method.

But stop cherry-picking your counterexamples; just look at MSDN: most methods check for non-nullability of their arguments, and most properties and methods that return reference types do not return null.
You can not presume the intent of the implementer when a parameter or return type is T they actually meant T! eg Foo! instead of Foo.
Of course I can't. I'm just estimating, and my estimation is that around 60% of them will have to face your dilemma: Break code or not use your feature.
The policy from Microsoft is clear: Don't use the feature.
None of the existing parameters that take reference type need to be fixed, as are still valid if you pass it as an argument a non-null reference. Only downside is it would still perform the null reference check, if it has them
Of course, but then you won't get the benefit of the compiler ensuring that you're not sending null. Basically, you don't use the feature.

C'mon, surrender: it's clear that it's a big issue (one billion $), there are huge amounts of code that will have to change, and you have to give people an upgrade plan:
  1. Disable warnings in all projects
  2. Enable warnings in some projects
  3. Fix warnings
  4. If there are more projects, goto 2
  5. Set warnings as errors
  6. Go to Heaven! (NonNullable<Heaven>)
This process is going to take time, everyone needs to choose when to do that and Microsoft needs to continue selling VS in the meantime.

Attributes and warnings are the only way to achieve this flexibility.
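For example, the compiler could lower the ! annotations to attributes plus checks. A sketch with a hypothetical [NotNull] attribute (the attribute name and shape are my assumption, not an existing API):

```csharp
using System;

// Hypothetical marker attribute the compiler would emit and read back.
[AttributeUsage(AttributeTargets.Parameter | AttributeTargets.ReturnValue)]
public class NotNullAttribute : Attribute { }

public static class Greeter
{
    // What "string! Greet(string! name)" could compile into:
    [return: NotNull]
    public static string Greet([NotNull] string name)
    {
        if (name == null)
            throw new ArgumentNullException("name");
        return "Hello, " + name;
    }
}
```

Projects that opt in get warnings/errors driven by the attributes; projects that don't just see plain string and keep compiling.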
Apr 14, 2014 at 7:35 PM
Edited Apr 14, 2014 at 7:42 PM
I said
... should implement null checks ...
this doesn't imply that they will have null-checks.
Dictionary<string!, string!> dic = new Dictionary<string!, string!> ();
string k = "Apple";
string v = "Pie";
dic.Add ( k , v ); // compile time error
even if it was 3rd party code, it still would be a compile-time and runtime error.
(There is no implicit cast / conversion of nullable ref type to non-null ref type. If you want to do this do an Explicit cast)
Dictionary<string!, string!> dic = new Dictionary<string!, string!> ();
string k = "Apple";
string v = "Pie";
dic.Add ( (string!)k , (string!)v ); 
Why are you assuming that an updated .Net will change the existing method signatures to non-null versions?! Microsoft and others wouldn't do this, as it would certainly break existing code and annoy consumers of their code.

Do all of the methods that use a reference type as a return type always return a non-null value? No.
Of course I can't. I'm just estimating, and my estimation is that around 60% of them will have to face your dilemma: Break code or not use your feature.
All they need is to provide a converter to the non-null version of their type, if they're sure it always returns a non-null reference. That doesn't break existing code. If the consumer wants non-null, they get it.
Apr 14, 2014 at 9:06 PM
Edited Apr 14, 2014 at 11:18 PM
(There is no implicit cast / conversion of nullable ref type to non-null ref type. If you want to do this do an Explicit cast)
This is what I don't agree with. With your solution, breaking changes are going to happen everywhere.
Why are assuming that an updated .net will change the existing method signatures to non-null versions?!. Microsoft and others wouldn't do this as would certainly break existing code and annoy consumers of their code.
Of course, if you don't use it, it won't break anything. I was assuming that we're talking about a useful feature that we can actually use, not a funny new toy to experiment with. We could also implement JavaScript's undefined and crazy this; as long as we don't use them, everything will be OK.
All the need is for them provide a convertor that provides to non-null version of their type, if they're sure it always returns ref of a non-null value. Doesn't break existing code. If the consumer want non-null they get it.
I'm not sure what you mean by a converter, but I think it will break the code, because now it doesn't compile! If your method returns a List<MethodBase> then, even if 100% of the time the elements are MethodInfo (MethodInfo inherits from MethodBase), changing it to List<MethodInfo> will break code, so it's a breaking change. The same applies to List<string> vs List<string!> with your solution (but not with mine).
Apr 14, 2014 at 10:21 PM
How do we apply your NonNull attribute to a local variable?
Apr 14, 2014 at 11:17 PM
In my solution NotNull is a compile-time illusion, just like Java generics.

In order to preserve this illusion in other assemblies, attributes are necessary for each non-nullable property, field, method argument, return type and base class, also considering generic classes.

Local variables don't need to be exported and are hidden inside method bodies, so no attributes are necessary; but the C# compiler, while compiling, knows which variables are marked with the not-null ! and can use this information to create meaningful warnings/errors and emit the necessary run-time checks.

My main design issue is where the run-time checks should be emitted for not-null method arguments and return types: in the client code, or inside the method implementations.
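A sketch of the two options, using plain C# for the checks the compiler would generate (the method names are made up for illustration):

```csharp
using System;

static class Sketch
{
    // Declared (proposed syntax): string! TrimChecked(string! arg)

    // Option A: checks emitted inside the method implementation (callee side).
    static string TrimChecked(string arg)
    {
        if (arg == null) throw new ArgumentNullException("arg");
        string result = arg.Trim();
        if (result == null) throw new NullReferenceException("return is null");
        return result;
    }

    // Option B: checks emitted in the client code (caller side).
    static void Caller(string value)
    {
        if (value == null) throw new ArgumentNullException("value");
        string result = TrimChecked(value);
        // ...and a caller-side check of the return value would go here too.
    }
}
```

Callee-side checks live in one place; caller-side checks let unannotated old callers keep their current behavior while recompiled code opts in.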
Apr 26, 2014 at 9:55 AM
After thinking more about it, I think a faster way to cast to non-nullable will be necessary.

Of course, conversions from T! to T are implicit, just as conversions from T to T? are for nullable types. But the other way around will require a cast to remove the warning/error.

Since these conversions are going to happen very often once the .Net libraries get updated, writing a cast could be way too long, especially for generic types. Also, since it's not based on a struct, there's no .Value member, so casting anonymous types will be impossible.
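A sketch of how verbose plain casts would get (the types and helper methods here are made up for illustration; the annotated syntax does not compile today):

```csharp
// Casting a generic type means repeating the whole annotated type name:
Dictionary<string, List<int>> raw = LoadData();
var data = (Dictionary<string!, List<int>!>!)raw;   // long and noisy

// And anonymous types have no name to write in a cast at all,
// and no .Value member to unwrap:
var person = new { Name = FindName() };  // Name is string
// ...no way to spell "cast person.Name's type to non-nullable" here
```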

Non-nullable casting operator

My proposal is using the ! symbol as non-nullable casting operator.
List<string!>! names = new List<string!>!(); 

string str = "hi";

names.Add(!str); //non-nullable casting 
There's no reason to keep it just for non-nullable reference types; we could also use it for value types:
List<DateTime>! names = new List<DateTime>!(); 

DateTime? now = DateTime.Now;

names.Add(!now); //non-nullable casting 
This symbol could get confusing when used with bool?, or any other type with the ! operator overloaded. I think the cast meaning should take precedence:
bool? flag = true; 

!flag //true

!!flag  //false
But maybe it's better to use a safer approach, using (!) instead of !.
bool? flag = true; 

(!)flag //true

!(!)flag  //false
The added benefit is that we could also define a (?) operator which, even if it's implicit, is useful in some situations:
DateTime? flag = isMonday ? DateTime.Now : null; // currently an error
DateTime? flag = isMonday ? (DateTime?)DateTime.Now : null; // annoying cast
DateTime? flag = isMonday ? (?)DateTime.Now : null; // a little less annoying
May 7, 2014 at 12:05 PM
Edited Jun 17, 2014 at 2:45 PM
I think C# should start with small steps..

First step is "!" The syntactic sugar for automatic (x!=null checks)
Allowed only on:
  1. Properties
string! Name
{
          get{ return name; }
          set{ name = value; } 
}

compiles into
string! Name
{
          get
          { 
                    if(name ==null)
                              throw new NullReferenceException("value is null");
                    return name;
           }
          set
          {
                    if(value ==null)
                              throw new NullReferenceException("value is null");
                    name = value; 
          } 
}
  2. Function return types
string! Foo()
{
 ......
return x;
}
compiles into
string! Foo()
{
 ......
         if(x ==null)
                  throw new NullReferenceException("return is null");
         return x;
}
  3. Function parameters
string! Foo(string! p)
{
 ......
return x;
}
compiles into
string! Foo(string! p)
{
        if(p ==null)
                  throw new ArgumentNullException("p is null");
         ......

         if(x ==null)
                  throw new NullReferenceException("return is null");
         return x;
}
And to make it future-proof I would just add a [CheckNotNull] attribute:
Compiled version of method above:
[CheckNotNull]string Foo([CheckNotNull]string p)
{ ....
With this, the compiler or IDE can raise some compile-time warnings. Code will crash in very specific places. We would write less code (LESS CODE!). And if the CLR in some future version gets NonNullable support, I would just add more ! usage cases.

And since if (x == null) would always be false on NonNullable classes, there won't be any change in language behavior.
May 7, 2014 at 5:28 PM
Edited May 7, 2014 at 5:30 PM
Olmo wrote:
My proposal is using the ! symbol as non-nullable casting operator.
The usual meaning of ! in C is negation rather than emphasis. I think the idea of an operator to indicate "perform an explicit cast if needed" is an essential idea which is missing in the language in other contexts (see my "narrow-if-necessary" post). Basically, the idea should be that if the compiler knows how to convert x to the type of y, but isn't sure that's what the programmer wants, there should be a way to say "you know what conversion is necessary--just do it".

If null-is-forbidden types are added, it would be helpful to have a means of distinguishing between "Assert X is not null, regardless of how it's being used" and "Assert X is not null if the recipient needs something non-nullable". If a null-forbidden piece of code is rewritten to be null-agnostic (both patterns have their uses), it would be desirable to avoid having to hunt down and remove any null-forbidding casts that are no longer appropriate.

Incidentally, if non-nullable types are added, I think such an addition could usefully go hand-in-hand with an enhanced constructor proposal I wrote which would statically verify that read-only fields get written exactly once in every possible execution path through every possible constructor and could never be seen in an unwritten state unless a constructor throws but a call to Finalize() resurrects the partially-constructed object.
May 7, 2014 at 7:58 PM
Edited May 7, 2014 at 7:59 PM
ttxman wrote:
I think C# should start with small steps..
I couldn't agree more! My whole point is that the pursuit of perfection has been holding up the issue for way too long, and my initial attempts are exactly what you propose.

Adding an attribute will be necessary not only for future-proofing, but also to provide good IntelliSense in other languages, like VB, or when you only have the assembly.

Another important thing is whether to allow List<string!> or not. I think we should, and move the validations to the client code when using Add, ElementAt, or the indexer, but do not validate deeply when assigning List<string> to List<string!>. This information should also be encoded in the attribute.
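A sketch of what those call-site checks could look like under this proposal (the annotated syntax is the proposed one and does not compile today; the helper methods are made up):

```csharp
List<string> lines = ReadLines();   // may contain nulls
List<string!> safe = lines;         // allowed: no deep validation on assignment

safe.Add(GetTitle());               // compiler emits a null check on the argument
string! first = safe[0];            // compiler emits a null check on the result
```

The assignment itself stays O(1); nullability is only enforced lazily, at the members where a null could actually cross the boundary.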

Finally, once you have all this static information, adding some kind of opt-in static validation shouldn't hurt, should it? The developer could choose to have errors, warnings or nothing at all for each particular project.

In order to make casting to non-nullable and nullable types faster, I've also proposed the (!) and (?) operators.
May 7, 2014 at 8:08 PM
We need to fix the billion dollar mistake. C# is the best language. It shouldn't be a mine field of null reference exceptions.
May 7, 2014 at 8:13 PM
Edited May 7, 2014 at 8:14 PM
supercat wrote:
The usual meaning of ! in C is negation rather than emphasis.
I know; I was proposing (!), not !.
I think the idea of an operator to indicate "perform an explicit cast if needed" is an essential idea which is missing in the language in other contexts (see my "narrow-if-necessary" post). Basically, the idea should be that if the compiler knows how to convert x to the type of y, but isn't sure that's what the programmer wants, there should be a way to say "you know what conversion is necessary--just do it".
I've read your proposal and I could agree with the idea if you find a better syntax than (var). If the C# team had chosen auto instead of var it would have made much more sense, and it could also have been used to help the compiler with type inference, as in:
Func<auto> factory = ()=>new { Name = "John"}; 
But with var, it just doesn't feel right.

Also, I don't feel big pressure for numeric casting, whereas if non-nullable conquers the BCL, the (!) operator is going to be really needed. Fortunately it's short.
If null-is-forbidden types are added, it would be helpful to have a means of distinguishing between "Assert X is not null, regardless of how it's being used" and "Assert X is not null if the recipient needs something non-nullable". If a null-forbidden piece of code is rewritten to be null-agnostic (both patterns have their uses), it would be desirable to avoid having to hunt down and remove any full-forbidding casts that are no longer appropriate.
Even if it does nothing, seeing redundant (var) casts wouldn't be beautiful either. I'm not worried by T! -> T changes; T -> T! worries me much more.
Incidentally, if non-nullable types are added, I think such an addition could usefully go hand-in-hand with an enhanced constructor proposal I wrote which would statically verify that read-only fields get written exactly once in every possible execution path through every possible constructor and could never be seen in an unwritten state unless a constructor throws but a call to Finalize() resurrects the partially-constructed object.
Where's your proposal?
May 7, 2014 at 9:50 PM
Olmo wrote:
But with var, is just doesn't feel right.
I'm not crazy about the syntax; I chose it largely because it uses a reserved word in a way that has no other legal meaning. Actually, for most cases I'd rather see a means of saying "don't bug me about conversions with specific variables, parameters, method returns, etc." but that proposal didn't seem to go over well either.
Also, I don't feel big pressure for numeric casting, whereas if non-nullable conquers the BCL, the (!) operator is going to be really needed. Fortunately it's short.
The ease of refactoring code in future will depend upon how it is written today. If it's necessary to change code from using one type to using another, such refactoring will require examination of every single cast to determine whether it means "I know the value is about to be used as X" and "I want the value to be converted to type X, regardless of what's done with it afterwards". Since whoever wrote the code is going to know which meaning applies, it seems senseless not to have a means of specifying that. Rather than offering a single casting form which is only applicable to null-forbidden types, I think it would be better to attack the more general problem even if the most common usages are associated with null-forbidden types.
Even if it does nothing, seeing redundant (var) casts wouldn't be beautiful either. I'm not worried by T! -> T changes; T -> T! worries me much more.
If e.g. a collection type is changed from forbidding nulls to allowing them, any cast-to-non-nullable operations which are not removed will cause runtime exceptions. If some casts remain necessary (e.g. a dictionary-ish type is changed to allow null values but not null keys), I would see some danger of accidentally leaving an erroneous cast-to-non-nullable in a seldom-used corner path.
Incidentally, if non-nullable types are added, I think such an addition could usefully go hand-in-hand with an enhanced constructor proposal I wrote which would statically verify that read-only fields get written exactly once in every possible execution path through every possible constructor and could never be seen in an unwritten state unless a constructor throws but a call to Finalize() resurrects the partially-constructed object.
Where's your proposal?
See https://roslyn.codeplex.com/discussions/544180 ("Primary Constructors--focus on field initializers")
May 7, 2014 at 10:28 PM
supercat wrote:
The usual meaning of ! in C is negation rather than emphasis.
The symbol ! to indicate NonNullable<T> would appear in context with a type name: myclass! =

So it is unambiguous with respect to the boolean not !, which appears in the context of a boolean expression: if( !(saved) ) { }


Like in Visual Basic, the = symbol means assignment or equality depending on its surrounding context.
result = x = y
First is assignment, second is equality.


If we could express Generic Operators
 prefix operator ( ! ) <T : Class> ( obj : T  ) : boolean  { return obj != null   }
 prefix operator ( ! ) <T>         ( obj : T? ) : boolean  { return obj.HasValue  }
 prefix operator ( ? ) <T : Class> ( obj : T  ) : boolean  { return obj == null   }
 prefix operator ( ? ) <T>         ( obj : T? ) : boolean  { return !obj.HasValue }
postfix operator ( ! ) ( x : Int ) : Int { return Factorial( x )  }

operator ( <   ) <T : IComparable<T>> ( a : T, b : T ) : boolean  { return a.CompareTo(b) <  0 }
operator ( <=  ) <T : IComparable<T>> ( a : T, b : T ) : boolean  { return a.CompareTo(b) <= 0 }
operator ( >   ) <T : IComparable<T>> ( a : T, b : T ) : boolean  { return a.CompareTo(b) >  0 }
operator ( >=  ) <T : IComparable<T>> ( a : T, b : T ) : boolean  { return a.CompareTo(b) >= 0 }
operator ( !=  ) <T : IComparable<T>> ( a : T, b : T ) : boolean  { return a.CompareTo(b) != 0 }
operator ( ==  ) <T : IComparable<T>> ( a : T, b : T ) : boolean  { return a.CompareTo(b) == 0 }
May 7, 2014 at 10:48 PM
Edited May 7, 2014 at 10:49 PM
dotnetchris wrote:
We need to fix the billion dollar mistake. C# is the best language. It shouldn't be a mine field of null reference exceptions.
Thanks for the motivation. I was feeling nobody else had NullReferenceExceptions everywhere...
May 7, 2014 at 10:48 PM
AdamSpeight2008 wrote:
So it is unambiguous with respect to the boolean not !, which appears in the context of a boolean expression: if( !(saved) ) { }
Like in Visual Basic the = symbol means assignment or equality depending on it's surrounding context.
I recognize that from a compiler standpoint there would be no ambiguity between !foo and (!)foo, but to me their appearance would suggest they should have similar meanings. Personally, I've never liked C's syntax for casting.
If we could express Generic Operators
I'm not really quite clear what your syntax is trying to say, aside from your being cute with factorial. I would have expected that the (!) operator as proposed would assert that its operand was non-null, and let the compiler make use of that assertion. Did you intend it to mean something else? As for your relational-comparison and equality-comparison operators, I see no reason to add to the already-horrific inconsistencies in those tokens' behavior.
May 7, 2014 at 11:03 PM
I think the idea of AdamSpeight2008 was to:
  • Clarify the non-ambiguity of (!) casting operator
  • Expand the power of operator overload to allow declaring generic operators, and operators outside the c# grammar. I know there are some languages that support it. Haskell?
  • Suggest a fix to the redundancy between Equals, IComparable and operator overloading.
But at the same time, he creates a lot of confusion because:
  • The casting operators should not return boolean
  • The syntax for the operators is something between C# and VB. We're in a C# compiler forum, syntax matters!
  • He touches way too many things, missing the point
May 7, 2014 at 11:31 PM
Edited May 7, 2014 at 11:34 PM
Simplifying Null and NonNull checks.
mymethod ( foo x, foo y )
{
 /* Do both contain a instance */
 if( !x && !y )
 {

 }
}

mymethod ( foo x, foo y )
{
 /* Are both null */
 if( ?x && ?y ) { throw someexception }

}
Having the ability to express Generic Operators would simplify the source code and the size of the .Net libraries.
E.g., to "gain" comparison operators, a type would only have to implement IComparable<T>.

Say you wanted to extend System.Math with additional operators:
postfix operator ( ² ) ( x : Double ) : Double   { return  x * x  }
postfix operator ( ³ ) ( x : Double ) : Double   { return  x * x * x }
        operator ( √ ) ( y : Double, x : Double ) { return Math.Power( x, (1/y)) }

var  C = 2  √ ( A² + B²) 

postfix operator ( ° ) ( deg : Double ) : Degree  { return New Math.Angle.Degree( deg ) }

postfix operator ( % ) ( percent : Double ) : Percent { return New Math.Percent ( percent ) }
        operator ( ± ) ( value : Double, percent : Percent ) : ValueLimits 
 { 
   var x = value - percent
   var y = value + percent
   return new ValueLimits( x, y )
  }
operator ( WithIn ) ( value : Double,  limits : ValueLimit ) : Boolean { return  (limits.lower <= value ) && ( value <= limits.upper ) } 


var value = 88
var result = value WithIn 100 ± 20%
Syntax is Nemerle-esque ( identifier : type rather than type identifier ).
May 7, 2014 at 11:50 PM
Also, if the "comparison operators" mean something different in the context of your class / structure, implement them in your class / structure, as they will bind more strongly than the generic default comparison.

Also it would be good to lift the restriction that they have to be defined in complementary pairs (<, >), (<=, >=), (==, !=). Why? You're going to implement them if you require them anyway.

I can think of examples that don't have complements. Currently I have to "pollute" my code with an operator that is callable and throws a NotImplementedException, when it should be optional in the first place.
May 7, 2014 at 11:59 PM
AdamSpeight2008

Really interesting how powerful Nemerle is for operator overloading! Especially for comparison operators. If you post a new thread I will support it.

As for operators outside of C# syntax, I wouldn't buy that. There's too much room for ambiguity if you give this power to the users, or they will come up with strange Unicode symbols and we'll need the Character Map utility all the time.

As for using ! and ? returning bool, I don't like it. Many languages like JavaScript convert null to false and everything else to true, so:
if(a)
if(a!=null)

if(!a)
if(a== null)
While in your case if(!a) would mean exactly the opposite!

Also, the number of characters saved is a constant 6 (== null), while for casting the savings can be huge for generic classes, or simply mandatory for anonymous classes.
May 8, 2014 at 12:30 AM
But in vb.net it's 14
If node IsNot Nothing Then
Pre-Roslyn preview it was 19
If Not node Is Nothing Then
With ! operator it 1 character
If !node Then
May 8, 2014 at 12:44 AM
AdamSpeight2008 wrote:
Also if the meaning of "comparision operators" meaning something different in the context of your class / structure.
Implement them in your class / structure, as they will bind more strongly than the generic default of comparision,
Operators are overloaded statically, unlike methods which are overridden virtually. This results in odd behavior when an overload exists for both a type and one of its subtypes, unless the subtype's overload is semantically equivalent to that of the base type but simply executes faster.
Consider:

void blah<T>(T x, T y) where T:class { return x==y; }
What would you expect blah("5", 5.ToString()) to return? If your proposed overloads were added and T was changed to IComparable<T>, what would you expect blah(double.NaN, double.NaN) to return?
May 8, 2014 at 12:54 AM
blah is a void returning method (aka Subroutine in vb parlance) so doesn't return anything!
May 8, 2014 at 1:08 AM
Fine. void blah etc.
May 23, 2014 at 6:41 PM
Have you heard of spec#? It is an experimental language based on c# with a few additions. Code contracts very much grew out of it. And most importantly, it has T! with almost the same semantics we are discussing here. I can't recall right away how they deal with T![], but they have a solution for it.
May 24, 2014 at 9:14 AM
Edited May 24, 2014 at 12:26 PM
TanveerBadar wrote:
Have you heard of spec#? It is an experimental language based on c# with a few additions. Code contracts very much grew out of it. And most importantly, it has T! with almost the same semantics we are discussing here. I can't recall right away how they deal with T![], but they have a solution for it.
Here is a nice tutorial:
http://www.codeplex.com/Download?ProjectName=specsharp&DownloadId=84056

It looks like string == string!, and the only way to write a nullable string is string?. But ! is more useful on generic types:

In a List<string>, T == T! == string! == string while T? == string?
In a List<string?>, T == T? == string? while T! == string! == string

I don't know if this level of flexibility is actually useful with generic types; I would like to see some examples.
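One place where the flexibility might pay off is a generic "first or null" helper; a sketch in the Spec#-style syntax under discussion (not valid C# today):

```csharp
// T? lets a generic method express "element or null" regardless of
// whether the list's element type is itself nullable:
static T? FirstOrNull<T>(List<T> list)
{
    return list.Count > 0 ? list[0] : null;
}

// With List<string>  (T == string!), the return type is string?
// With List<string?> (T == string?), the return type is still string?
```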

The important thing is that Spec# took the formal path: not-nullable is the default, so you don't need to write horrible code like Dictionary<string!, List<string!>!>!.

They also don't say anything regarding attributes, runtime checks and interoperability with other languages.

All these decisions make Spec# easy to understand and nice to write, but would break all existing C# code.
Jun 17, 2014 at 3:15 PM
Olmo wrote:
Another important thing is whether to allow List<string!> or not. I think we should, and move the validations to the client code when using Add, ElementAt, or the indexer, but do not validate deeply when assigning List<string> to List<string!>. This information should also be encoded in the attribute.
I think this would break the hypothetical NotNullable<T> type if the CLR were to adopt it. Most importantly, with the introduction of NonNullable the code would behave differently (the deep validation would be there) after recompilation, and that would be very bad.
In order to make casting faster to non-nullable and nullable types, I've also proposed (!) and (?) operators.
This depends on string! being a type. And it can't be a type and still be compatible across all CLR languages.

Adding an attribute will be necessary not only for future proof. Also to provide good IntelliSense on other languages, like VB, or when you only need the Assembly.
If it just generated null checking as I proposed, it would be automatically compatible with all CLR languages (and future-proof). And the attribute would be only for a better IDE experience (and maybe for some optimization).
My whole point was to introduce a minimal set of features that would be compatible without any changes in the CLR, and that would not break anything with the introduction of a real CLR-based NotNullable type; I can't think of more features like this.

You are just adding features that seem problematic to implement cleanly without CLR changes. And that won't happen in the near future. :)
Jun 19, 2014 at 12:37 PM
Taking a look recently at Apple Swift (https://developer.apple.com/library/prerelease/ios/documentation/Swift/Conceptual/Swift_Programming_Language/OptionalChaining.html#//apple_ref/doc/uid/TP40014097-CH21-XID_312), I feel a little bit of envy: it looks like they got nullability right.

Swift has no distinction between reference and value types regarding nullability. Both are non-nullable by default, and in order to access a nullable type's members you have to use '!' (throw an exception if null) or '?' (propagate nullability).

I'm becoming confused now. ttxman seems concerned about future compatibility at the CLR level, but I think the language is more important. I don't think it makes sense to have a NotNullable<T> at the CLR level at all:
  • If we go the hacky way, NotNullable<T> should be just a compile-time invention, as I proposed. Prepare for a zillion breaking changes, though.
  • If we go the Swift way, NotNullable<T> should be the default state, and we'll need to add Nullable<T> at the CLR level.
Depending on what the future brings, even the smaller set of features that ttxman proposes will contaminate type annotations with redundant '!' symbols. I just don't want to write Dictionary<string!, List<string!>!>!; I want these semantics by default.

One thing that worries me is that, in Swift, in order to have non-nullable fields, they have to be set in the constructor or a field initializer.

For classes representing entities that go to the database, I want the entities to have non-nullable strings and references; but while they are being constructed, before they are saved for the first time, the only thing that makes sense is for these values to be null.

We use object initializers instead of constructors and, more importantly, we bind new entities in HTML5/XAML user interfaces. The non-nullability of value types becomes more annoying in this case, like having to make a Gender property nullable just so null is allowed the first time, even if once saved Gender will always be non-nullable.
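A sketch of that entity dilemma (a hypothetical entity class in the proposed syntax, which does not compile today):

```csharp
public class PersonEntity
{
    public string! Name { get; set; }     // non-nullable once saved...
    public Gender! Gender { get; set; }   // ...same wish for value-like members

    // But with object initializers and UI binding there is necessarily
    // a window in which the entity exists and Name is still null:
    //     var p = new PersonEntity();   // Name == null here
    //     BindToForm(p);                // the user fills it in later
    //     Save(p);                      // only from here on is Name never null
}
```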
Jun 21, 2014 at 7:28 AM
What's needed fundamentally is a distinction between storage locations that are used to identify objects, versus those which are used to encapsulate as values information held by objects. Storage locations that encapsulate values, regardless of whether they're value types or reference types, can generally have a sensible default initialization; those which encapsulate identity, however, have no sensible default other than null. I really wish C# had been designed a bit more like C++, with its distinction between references and values. Not going quite so far as C++ with things like copy and move constructors, but providing the necessary tools to automate cloning and comparison operators in most common cases.
Jun 29, 2014 at 10:29 PM
I have clocked a few thousand lines in Swift, and it works well with your own code where you can control it. However, you run into a wall of compromises whenever you have to touch the Cocoa framework (the BCL equivalent; Swift has a very small base runtime of a few generically typed collections, bridged to the Cocoa untyped equivalents) and it's not half as nice.

They invented the "implicitly unwrapped optional" (used as type! instead of type?) which is an optional where every operation besides checking if there's a value there implicitly unwraps the value. It eases the programmer pain of working across the boundary but it doesn't give you any of the flowing "purity of nullness" back.

While Swift didn't get a clean slate, it is enormously satisfying when you can work with code that is clean-slate code, especially with optionals being a core concept in the language. ?. exists and is called "optional chaining" and works wonderfully, especially with if let x = which gives you a scope in which x is defined and definitely not null. This extends to matching multiple values in this form in switch cases for very powerful pattern matching.

Enough about Swift, except to note that they took the right default by having the bare type names be non-nullable, so that every kind of type (structs, enums and classes) gets the benefit of non-nullability.

If C# 7 was to come in a clean slate version where nullability was to be rethought (it's the season for reboots across tooling and platforms and I know I'd switch), my advice would be to just crib Swift's model completely.
Jun 30, 2014 at 4:56 PM
JesperTreetop wrote:
If C# 7 was to come in a clean slate version where nullability was to be rethought (it's the season for reboots across tooling and platforms and I know I'd switch), my advice would be to just crib Swift's model completely.
If doing a reboot of a framework, I think reference-type storage location types need to separate out the type of the thing to which a reference refers from the kind of reference. If one is going to expand the type system to distinguish List<nonnullable reference to Animal> and List<arbitrary reference to Animal>, as well as the types of their backing stores, and make it possible for code to retrieve an item from the former type and pass it directly to code which requires a non-nullable reference, then one may as well expand it to incorporate a variety of other aspects of references as well, including the one I would consider most important: object ownership. I would regard all four of the following as being sufficiently different that they should have different types, even though they all identify the same kind of heap object.
  1. A reference to a System.Int32[] whose contents will never be changed, and may freely be shared only with code that will never modify the contents thereof.
  2. A reference to a System.Int32[] which is owned by the holder of the reference, and may be freely changed by its owner, but may not be shared with any entity not owned by the owner of the int32[].
  3. A reference to a System.Int32[] which is owned by the owner of the entity holding the reference, and may not be shared with anyone but that owner or other things owned (possibly recursively) by that owner.
  4. A reference to a System.Int32[] which is owned by some outside entity for "public" benefit (most likely sharable only with code that won't write to it, but which--unlike in case #1 above--may be written by the owner).
Not all real-world ownership scenarios can be modeled in a reasonable type system, but the common ones can. At present, there is nothing in the type system to indicate whether an object method with return type IList<T> is returning a new list to which the recipient could if desired claim ownership, a sharable immutable list, a guaranteed-live view of a list which might change, an at-least-temporarily-valid view of a list whose behavior would be unspecified if the underlying data changes, or something else. Code which expects one of the above is unlikely to work if given another, but there is nothing in the type system to indicate which of the above is required in any particular scenario.
Jul 31, 2014 at 3:23 PM
Those who get rid of null are doomed to reinvent it with magic values, but will lose automatic run-time checks. :)

The problem with null isn't the exception that is thrown, it's the lack of an appropriate default value having been assigned when the variable is accessed. The appropriate value may not be the same in each case, and simply paving over nulls with default values could result in just as many bugs, just hidden from view.

For example, if I create a DateTime and forget to assign a value, it has a default value of DateTime.MinValue. While this won't throw an exception, it may create all sorts of undesired behavior in my application because the app is expecting a more reasonable value.
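To illustrate the point, here is a sketch in TypeScript (the `Invoice` type and both functions are hypothetical) of how a paved-over default misbehaves just as quietly as the DateTime.MinValue example:

```typescript
// A silently-applied default hides the bug; a nullable type surfaces it.
interface Invoice { dueDate: Date | null }

// Paving over null with a default: no exception, but the "due date" is now
// the epoch (comparable to DateTime.MinValue) and downstream logic misbehaves.
function daysUntilDuePaved(inv: Invoice): number {
  const due = inv.dueDate ?? new Date(0);
  return Math.round((due.getTime() - Date.now()) / 86_400_000);
}

// Making the absence explicit: the caller is forced to notice the missing value.
function daysUntilDueChecked(inv: Invoice): number | null {
  if (inv.dueDate === null) return null; // surfaced instead of hidden
  return Math.round((inv.dueDate.getTime() - Date.now()) / 86_400_000);
}
```

With a missing due date, the "paved" version returns an absurd large negative number instead of failing, which is exactly the hidden-bug risk described above.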

We already have a simple coalesce operator, and assuming we're getting the Elvis operator, null-checking will be even easier to implement.

So, rather than changing the runtime, why not instead use an attribute to mark objects (or classes?) that we never want to be null, and have Roslyn warn us when some assignment to them could result in a null, so steps can be taken to check and coalesce them to a better value?

Another option might be to allow classes to overload the result of default(T) with their own appropriate default value, and have the compiler coalesce them automatically. At least then we're not changing the runtime. There are pros and cons to having the type decide whether it is effectively nullable or not rather than making that a variable-level decision, and some logical issues when making heavy use of interfaces, but I suspect this would still be a lot easier to implement.

Going further, if the default idea is good but it needs to be a variable-level decision, we could create a syntax for defining the intended default for a given variable, and coalesce to that value every time it is assigned a new value. Possible example:
public MyClass x ?? (new MyClass() { name: "Unspecified" });
What comes after the ?? would be both the initial assignment (just like =) and the value the compiler automatically coalesces it to after future assignments.
Jul 31, 2014 at 5:10 PM
richardtallent wrote:
Another option might be to allow classes to overload the result of default(T) with their own appropriate default value, and have the compiler coalesce them automatically. At least then we're not changing the runtime. There are pros and cons to having the type decide whether it is effectively nullable or not rather than making that a variable-level decision, and some logical issues when making heavy use of interfaces, but I suspect this would still be a lot easier to implement.
The runtime assumes that the default value for a storage location that occupies n bytes will be n bytes of zeroes. Changing that would be a major change.

It would have been possible for immutable classes to have sensible default values if compilers had been forbidden from using callvirt to invoke non-virtual functions. In that case, an implementation of String.Length could have looked something like:
[AllowNullInvoke()] int Length()
{
  if (this == null) return 0;
  return _length;
}
and Object.Equals() could have been implemented something like:
protected virtual bool VirtEquals(Object other)
{
  return false; // Note that we only get here if we're not reference-equal, so no need to re-test.
}
[AllowNullInvoke()] bool Equals(Object other) // NON-virtual
{
  if (this == other) return true;
  if (this == null) return false;
  return VirtEquals(other);
}
I understand the desire of C# designers to generally guard against a null this, but IMHO that should have been the responsibility of the called function (e.g. have a rule that if a non-virtual method isn't tagged with AllowNullInvoke, the compiler will auto-generate
if (this == null) throw new NullInvocationException( <name of function> );
before any other statements). Such a design would improve the usefulness of the diagnostic (unlike the present NullReferenceException, it would indicate which method was being invoked upon the null reference), and would also ensure that methods which aren't prepared for a null this would never have to deal with one, even when invoked via Reflection or from a language which allows instance methods to be called on null objects.

Alas, since C# didn't provide a means of invoking instance methods on null objects, that pattern doesn't work.
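The pattern does work wherever dispatch is static; C# extension methods, like free functions, may receive a null receiver. A TypeScript sketch of the same null-tolerant Length and Equals shapes (`lengthOf` and `safeEquals` are illustrative names, not library functions):

```typescript
// A null-tolerant "method" written as a free function: the same trick static
// dispatch permits, since the receiver arrives as an ordinary argument.
function lengthOf(s: string | null): number {
  if (s === null) return 0; // the callee decides how to handle a null receiver
  return s.length;
}

// The Equals shape from the post: both-null and reference-equal are true,
// one-sided null is false, and everything else would defer to virtual dispatch.
function safeEquals(a: object | null, b: object | null): boolean {
  if (a === b) return true;
  if (a === null || b === null) return false;
  return false; // stands in for the VirtEquals virtual call
}
```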
Aug 1, 2014 at 4:44 PM
Olmo, I really like your proposal. I think it would work. Attributes seem like the best way to do it, from a compatibility perspective.

A lot of this is very similar to Apple's Swift language...
  • Non-nullable class T in Swift, T! in .NET - you can safely use the . operator
  • Nullable class T? in Swift, T? in .NET - you are required to use the ?. operator
  • Convenience class T! in Swift, T in .NET for back-compat - you can use . operator but it might throw NullReferenceException
  • Conversion from an expression expr of type T? or T into T! would be written expr!, using a bang operator to do forced unwrapping, which might throw a NullReferenceException. Olmo, this is the syntax you proposed for T -> T!
Swift has the same problem that VB/C# has, of a vast legacy of existing frameworks and libraries which return nullable classes. For instance even Swift's "string.replace" is declared in the framework as possibly returning null (because it was written prior to the introduction of non-nullable types). The solution in Swift is that the compiler has baked-in knowledge about how to project certain core framework types, e.g. it knows that string.replace will never return null. I would hope for something similar in C#, where we could "annotate" third-party library functions with non-nullability information, so we can continue to use them even if they haven't yet been rewritten.
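One way to "annotate" a legacy API without compiler magic is a thin wrapper that asserts and narrows the type. A TypeScript sketch (`legacyReplace` is a hypothetical stand-in for a library function whose declared return type is too loose):

```typescript
// Pretend this is the legacy library: its declared return type is nullable,
// even though in practice it never returns null.
const legacyReplace: (s: string, from: string, to: string) => string | null =
  (s, from, to) => s.split(from).join(to);

// The "annotation": a thin wrapper that asserts and narrows, so the rest of
// the codebase sees a non-nullable signature.
function replaceNonNull(s: string, from: string, to: string): string {
  const r = legacyReplace(s, from, to);
  if (r === null) throw new Error("legacyReplace broke its contract");
  return r;
}
```

This is the library-side analogue of compiler-projected nullability: the check runs once at the boundary instead of at every call site.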

Swift actually has worse problems than VB/C#. That's because traditional Objective-C developers are used to the fact that, if you have a variable x which might be null, then the call to x.foo() will just silently fail in the case that x is null. So they are used to never needing null-checks in their code, and will be in for an unpleasant shock when they start using Swift's optionals. By contrast, VB/C# developers already face NullReferenceException in that situation, so won't be faced with the same shock.


Anyway, I agree with you Olmo: this could be done entirely in the compiler, with attributes, using compile-time checking, and without CLR support.
Aug 2, 2014 at 3:13 AM
Edited Aug 2, 2014 at 4:19 AM
Hi jlq1004! Thanks for your support, and for taking the time to read this long conversation :)

I have to admit that, since I learned how nullability works in Swift, I've been a little bit in 'surrender mode' about my non-nullable proposal for C#.

While I think my proposal would work and would be backwards compatible, I'm afraid that the value-type vs reference-type asymmetry would force so much attention onto nullability that it would ruin the whole experience.

These are my current concerns:

1.- I just hate writing Dictionary<string!, List<string!>!>!. It looks like I'm a teenager asking for attention.
2.- Writing generic code that wants to return null is horrible in C#; that's why Dictionary doesn't have a TryGet method that returns null when the key is not found. You always have to split the case for classes and for structs.

Currently you have to write:
public V TryGet_ClassVersion<K, V>(this Dictionary<K, V> dictionary, K key) where V : class
{
    V result; 
    if (dictionary.TryGetValue(key, out result))
        return result; 
 
    return null;
}

public V? TryGet_StructVersion<K, V>(this Dictionary<K, V> dictionary, K key) where V : struct
{
    V result; 
    if (dictionary.TryGetValue(key, out result))
        return result; 
 
    return null;
}
The return type is the only difference :S.
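In a language where nullability composes uniformly over any T, the duplication above disappears. A minimal TypeScript sketch, using `undefined` as the "not found" marker for every value type alike (`tryGet` is an illustrative name):

```typescript
// One generic function covers both the "class" and "struct" cases --
// no duplicated class/struct overloads, because V | undefined is legal
// regardless of what V is.
function tryGet<K, V>(dict: Map<K, V>, key: K): V | undefined {
  return dict.get(key);
}
```

The same single definition works for `Map<string, number>` and `Map<string, string>`, which is exactly the property the two C# overloads above are simulating.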

Adding T! will make things, if not worse, just as bad. Let's try:
Dictionary<string!, string!>! dic = new Dictionary<string!, string!>!();
dic.Add("apple", "red");
string! color1 = dic.TryGet_ClassVersion("pear"); 
string!? color2 = dic.TryGet_StructVersion("pear"); 
The first try calls the class overload, but this method is going to be checked for non-nullability to keep type safety, missing the point.
The second try won't compile, because it is invalid to create a Nullable<T> of a reference (non-nullable) type. Of course!

None of these problems happen in Swift.

I will consider a new mode, option symmetric, at the top of the C# file to bring C# into a world where T means non-nullable even for reference types and T? means nullable even for reference types. This option, similar to Option Strict in Visual Basic, would be new, so backwards compatibility won't be an issue: if you want to use it, you have to update the file. I hate the idea of having two modes in C#, but it's the one billion $ problem...
  • From the compiler/tooling point of view that should be doable, I'm guessing, but it's just one more mode to consider everywhere: the watch window, the immediate window, Razor views, a future REPL...
  • From the runtime/code-generation side it will be trickier. T? should have Value and HasValue properties, even for string?, which is exactly string in order to preserve binary compatibility. So any normal reference accessed through a T? in symmetric mode should be accessed through the Value property and checked via the HasValue property, even if these properties do not exist! (I assume using reflection you will notice the trick.)
The idea is to be able to write this method:
option symmetric

public V? TryGet<K, V>(this Dictionary<K, V> dictionary, K key)
{
    V result; 
    if (dictionary.TryGetValue(key, out result))
        return result; 
 
    return null;
}
And use it like this:
option symmetric;
 
Dictionary<string, string> dic = new Dictionary<string, string>();
dic.Add("apple", "red");
string? color2 = dic.TryGet("pear"); 
In order to make it work, we'll also need to relax the restrictions on Nullable<T> to allow nesting. So the algebra will be:
//Symmetric mode -> Asymmetric mode
int      -> int
int?     -> int?
string   -> string! 
string?  -> string
string?? -> string?

//And for generics:
T  -> if (T is struct) T
      if (T is class) T!
      if (T is a faked nullable type of M) M
T? -> if (T is struct) T?
      if (T is class) T
      if (T is a faked nullable type of M) M?
So the basic idea is that Nullable<T> should be nestable (for genericity), and the first Nullable<T> where T is a reference type should be optimized away to save memory and keep binary compatibility.

The conversion could be done by the compiler and shown in IntelliSense. Still, many programmers will struggle with such complexity.
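The nestable-nullable algebra above can be sketched with an explicit option type. TypeScript's own `T | null` collapses when nested (so it cannot express `string??` directly); a hypothetical `Opt<T>` wrapper stands in for the proposal's nested Nullable<T>:

```typescript
// A nestable option: Opt<Opt<T>> does NOT collapse, unlike (T | null) | null.
type Opt<T> = { has: true; value: T } | { has: false };

const some = <T>(value: T): Opt<T> => ({ has: true, value });
const none = <T>(): Opt<T> => ({ has: false });

// Opt<Opt<string>> distinguishes "no entry" from "entry whose value is absent",
// which is exactly what string?? is meant to express.
const missing: Opt<Opt<string>> = none();
const presentButNull: Opt<Opt<string>> = some(none<string>());
const present: Opt<Opt<string>> = some(some("red"));
```

The proposal's "optimize away the first Nullable over a reference type" would then be a memory-layout trick on top of these semantics, invisible to the type system.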
Aug 2, 2014 at 4:46 AM
option symmetric

public V? TryGet<K, V>(this Dictionary<K, V> dictionary, K key)
{
    V result; 
    if (dictionary.TryGetValue(key, out result))
        return result; 
 
    return null; /* <-- What "value" does this have? */
}
What does the function return if the key cannot be found? In your symmetric world null isn't a valid value.
Aug 2, 2014 at 12:11 PM
Edited Aug 2, 2014 at 4:08 PM
Here are examples for all the possibilities:
int? val = new Dictionary<string, int>().TryGet("foo");
int?? val = new Dictionary<string, int?>().TryGet("foo"); //nested normal nullable with two booleans
string val = new Dictionary<string, string!>().TryGet("foo");
string? val = new Dictionary<string, string>().TryGet("foo"); //Nullable for reference type. Boolean + null value
And in symmetric mode.
option symmetric;
int? val = new Dictionary<string, int>().TryGet("foo");
int?? val = new Dictionary<string, int?>().TryGet("foo"); //nested normal nullable with two booleans
string? val = new Dictionary<string, string>().TryGet("foo");
string?? val = new Dictionary<string, string?>().TryGet("foo"); //Nullable for reference type. Boolean + null value
Both code snippets compile to exactly the same thing and both are valid C#. The first one has backwards-compatible syntax. The second one is nicer.

Both are binary backwards compatible.

So answering your question.
What does the function return if the key cannot be found? In your symmetric world null isn't a valid value.
In my symmetric world null is completely valid in variables of type T?, independently of whether T is a value or reference type.
Aug 2, 2014 at 4:32 PM
Olmo wrote:
I will consider a new mode, option symmetric at the top of the C# file to bring C# in a world where T means non-nullable even for reference types and T? means nullable even for reference types. This option, similar to option strict in Visual Basic, will be new so backwards compatibility won't be an issue, if you want to use it you have to update this file. I hate the idea of having two modes in C#, but it's the one billion $ problem...
After a=new T[1], what will be the value of a[0]? What will be the value of any new array elements created via Array.Resize()?
Coordinator
Aug 2, 2014 at 7:12 PM
Olmo wrote:
1.- I just hate writing Dictionary<string!, List<string!>!>!. It looks like I'm a teenager asking for attention.
2.- Writing generic code that wants to return null is horrible in C#; that's why Dictionary doesn't have a TryGet method that returns null when the key is not found. You always have to split the case for classes and for structs.
Answer1: going back to basics,
String!   // this is a non-nullable string
String?   // this is a nullable string
String    // Nullable "String?" but you're not forced to ?. off it, since s.Length stands for s!.Length

Guid!   // this is a non-nullable struct
Guid?   // this is a nullable struct
Guid    // Non-nullable "Guid"
In each case the third option isn't set in stone. It's just a convention that we defined the way we thought most convenient. I'd keep this "convenience" foremost:
Dictionary!<string, List<string>>  // implicit shorthand for Dictionary!<string!, List!<string!>>
Dictionary!<string, List<string?>>  // implicit shorthand for Dictionary!<string!, List!<string?>>
In other words: I believe that when people have non-nullable "outer" generic types, they most commonly want the inner ones to be non-nullable as well, so we might as well adopt that convention. Users can always "escape out of it" by putting the ? explicitly inside.




As for point (2), there must be milder alternatives. For instance we can already write the equivalent of Swift's "if let"...
int? x0 = null, x1 = 15;
string s0 = null, s1 = "hello";
if (x0.IfLet(out var x)) { Console.WriteLine("x0:{0}",x);}
if (x1.IfLet(out var x)) { Console.WriteLine("x1:{0}",x);}
if (s0.IfLet(out var s)) { Console.WriteLine("s0:{0}",s);}
if (s1.IfLet(out var s)) { Console.WriteLine("s1:{0}",s);}



static bool IfLet<T>(this T e, out T v) where T:class {
    v = e; return (e != null);
}

static bool IfLet<T>(this T? e, out T v) where T : struct {
    v = e.HasValue ? e.Value : default(T); return e.HasValue;
}
I have a thought at the back of my mind that the remaining overload things could be solved by changing the CLR to allow overloads based on generic constraints, but I haven't yet got concrete thoughts...
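For comparison, TypeScript reaches the same ergonomics as IfLet through flow-based narrowing rather than overloads; no class/struct split is needed. A minimal sketch (`describe` is an illustrative name):

```typescript
// Flow-based narrowing plays the role of Swift's "if let": inside the guarded
// block the compiler treats x as non-nullable, so members are safe to use.
function describe(x: string | null): string {
  if (x !== null) {
    return `value: ${x.length}`; // x narrowed to string here
  }
  return "empty";
}
```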
Aug 3, 2014 at 10:50 AM
lwischik wrote:
In each case the third option isn't set in stone. It's just a convention that we defined the way we thought most convenient. I'd keep this "convenience" foremost:
Dictionary!<string, List<string>>  // implicit shorthand for Dictionary!<string!, List!<string!>>
Dictionary!<string, List<string?>>  // implicit shorthand for Dictionary!<string!, List!<string?>>
In other words: I believe that when people have non-nullable "outer" generic types, they most commonly want the inner ones to be non-nullable as well, so we might as well adopt that convention. Users can always "escape out of it" by putting the ? explicitly inside.
Pedantic note: Shouldn't the ! marks be at the end of the type declaration? So Dictionary<string!, List<string!>!>! instead of Dictionary!<string!, List!<string!>>. Yours looks definitely easier to parse though.

I disagree on 'the way it should be'. String shouldn't be string? in a perfect world; it's just convenient for historical reasons.

I'm not sure if I like your idea of the compiler automatically adding ! to my code based on 'heuristics' like generic parameters. What about generic methods, will it do it too? If not, LINQ will get really confusing.

I agree that 90% of the time you're right and nulls are not common values in generic methods. But right now this is valid C#:
List<string> list = new List<string>();
string str = null;
list[0] = str; 
So your proposal is a breaking change. What error message are you going to show? "Impossible to convert string to string" or "Impossible to convert string to string!"?

And what about this code:
Lazy<string> nameLazy = new Lazy<string>(()=>ComplexCalculation()? "me" : null);
This code suddenly breaks, and for generics that wrap just one value, instead of a list, your non-nullable default is much more questionable.

I'm assuming a high bar for backwards compatibility; since you're an intern, maybe you know if the C# team has become more liberal :).
As for point (2), there must be milder alternatives. For instance we can already write the equivalent of Swift's "if let"...
int? x0 = null, x1 = 15;
string s0 = null, s1 = "hello";
if (x0.IfLet(out var x)) { Console.WriteLine("x0:{0}",x);}
if (x1.IfLet(out var x)) { Console.WriteLine("x1:{0}",x);}
if (s0.IfLet(out var s)) { Console.WriteLine("s0:{0}",s);}
if (s1.IfLet(out var s)) { Console.WriteLine("s1:{0}",s);}



static bool IfLet<T>(this T e, out T v) where T:class {
    v = e; return (e != null);
}

static bool IfLet<T>(this T? e, out T v) where T : struct {
    v = e.HasValue ? e.Value : default(T); return e.HasValue;
}
I have a thought at the back of my mind that the remaining overload things could be solved by changing the CLR to allow overloads based on generic constraints, but I haven't yet got concrete thoughts...
I like the IfLet method! But it's just one of the few cases where overload resolution works. If the nullable is in the return type instead of a parameter, then you need overloads on generic constraints. Pretty much any code returning default(T) (FirstOrDefault, SingleOrDefault) is already a workaround to this problem.

Overloading on generic constraints will relieve some pain on the client code, but the library will still have some code duplication. Not a big deal for these simple methods, but since it is based on overloads your solution is viral: any method using it will also need to be split into many versions, one for classes, one for structs, and, if nested nullables are not allowed, another one for nullables.

The overload card won't save us if we have these problems with generic types instead of methods. For example we will not be able to write this code:
class Dictionary<K, V>
{
    public V? TryGet(K key)   // You have to return a nullable value, and here you cannot overload on generic constraints
    {
        ...
    }
}
Option Symmetric

Now let me promote a little bit more the option symmetric solution:
  • With option symmetric you really forget about the unfortunate history of C# (and Java, C++, and pretty much any other popular language) regarding nullability. In a few years all the code will be option symmetric and the problem will be gone.
  • Even if, at the binary level, string? and int? will be different, if designed from scratch this will also be an interesting optimization to save the boolean for all the reference types.
  • IL code for generic methods is already different for any struct, and shared for the reference types, so there's a perfect spot for hiding the magic of the ghost Value and HasValue properties.
  • Since the compiler will have two modes, the transition could be much simpler, similar to transforming .js to .ts. You add option symmetric to a file and then consider adding ? and ?. where it makes sense, but most of the code shouldn't need changes.
  • If you're lazy you could also use a Roslyn refactoring, since it's just a simple syntactic transformation, but you'll lose the cathartic opportunity to reconsider your code.
The problem now is that every C# surface has to choose which mode it supports, and, if both, which is the default:
  • Immediate windows
  • Watcher
  • Goto Definition on automatically generated code from assemblies
  • Razor views
  • LINQ Pad
  • ...
Hard choice....
Aug 3, 2014 at 7:08 PM
Olmo wrote:
I disagree in the 'the way it should be'. String shouldn't be string? in a perfect world, it's just convenient for historical reasons.
IMHO, there should have been separate types for string objects and string values [perhaps called StringObject and String]. A String should have been a structure type with a single field that encapsulated a reference to an object holding the actual text. The default String value would have held a null reference, but the semantics of the type would specify that a String holding a null reference should behave as a zero-character sequence. A widening (boxing) conversion should exist from String to StringObject, and a narrowing (unboxing) one from StringObject to String. Boxing a non-empty String should have yielded the encapsulated reference, while boxing an empty string would yield StringObject.Empty. Unboxing a non-zero-length StringObject to String would encapsulate it; unboxing a zero-length StringObject would encapsulate a null. Unboxing a null reference to String would be an error.

Such semantics would yield results consistent with the way the earlier Common Object Model handled strings. Although COM string handling was motivated by efficiency factors which aren't relevant in .NET (since code which receives a COM string is required to free it when it is no longer needed, passing around an instantiated "empty string" object would be more expensive than passing around a null reference), having the contents of a String[] default to empty strings would be useful in the same way that it is useful to have the int[] elements default to zero.

I think the biggest reason that String is a class rather than a structure is that until .NET 2.0 there was no mechanism by which it would have been possible to avoid an extra layer of boxing on a structure-type String. If boxing and unboxing logic somewhat like that associated with Nullable<T> had been included in String, however, that would not have been a problem.

PS--I also wish that the distinction between String and StringObject was also available for other value types--that the Type associated with a heap object created by boxing a value type Foo would be distinct from the storage-location type Foo, such that one could declare a variable of type "reference to boxed Foo" which could be null, or could identify a boxed Foo, but was guaranteed not to identify anything that wasn't a boxed Foo. C++/CLI allows such things, and they can be useful, but because it is C++/CLI which is supporting them rather than .NET, they don't work with generics.
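The String-as-structure semantics described above can be sketched as a wrapper whose null encapsulation behaves as an empty string (the `Str` class is hypothetical; this shows the semantics, not the boxing machinery):

```typescript
// Sketch of the proposed String value semantics: a wrapper around a possibly
// null reference that behaves as a zero-character sequence when null.
class Str {
  constructor(private readonly ref: string | null = null) {}

  // the default Str (null inside) acts like ""
  get length(): number { return this.ref === null ? 0 : this.ref.length; }
  toString(): string { return this.ref ?? ""; }

  // "boxing": a non-empty Str yields its reference, an empty one a shared ""
  box(): string { return this.ref ?? ""; }
}
```

The key property is that `new Str()` (the all-zeroes default the runtime would produce) is a fully usable empty string, analogous to how the default int is a usable zero.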
Coordinator
Aug 3, 2014 at 9:46 PM
Olmo wrote:
I agree that 90% of the times you're right and nulls are not common values in generic methods. But right now this is valid C#:
List<string> list = new List<string>();
string str = null;
list[0] = str; 
So your proposal is a breaking change. What error message are you going to show? "Impossible to convert string to string" or "Impossible to convert string to string!"?
I don't see the breaking change. My idea is that
  • List<string> is a nullable list with nullable strings inside, and you don't need to ?. off the list or the strings
  • List<string!> is a nullable list with non-nullable strings inside
  • List!<string> is shorthand for List!<string!>, a non-nullable list with non-nullable strings. I imagine this comes from the (quite limited) rule, "if you have an outer generic type with ! on it, then implicitly apply this to everything inside", no more.
I'm assuming a high bar in backwards compatibility, since you're an intern, maybe you know if the C# team has become more liberal :).
I'm not an intern! I've been in charge of VB language design, and on the C# language design team, since 2009 :) And yes, the C# team remains adamantly in favor of back-compat. (VB too but marginally less so... we'd have been okay with making "NameOf" a reserved keyword.)

By the way, my VB experience is what makes me so adamantly opposed to "option symmetric" :)
I like the IfLet method! But it's just one of the few cases where overload resolution works. If the nullable is in the return type instead of a parameter, then you need overloads on generic constraints. Pretty much any code returning default(T) (FirstOrDefault, SingleOrDefault) is already a workaround to this problem. Overloading on generic constraints will relieve some pain on the client code, but the library will still have some code duplication. Not a big deal for these simple methods, but since it is based on overloads your solution is viral: any method using it will also need to be split into many versions, one for classes, one for structs, and, if nested nullables are not allowed, another one for nullables.
I agree. Deep down, I suspect generic type parameters would need to work a bit differently. If you just write "f<T>(T x)" -- well, are you required to write x?.g(), or can you write x.g(), and if so, is this short for x!.g()? The ?. operator already has some of these issues, but only in corner cases, e.g. var x = a?.b isn't allowed if b is of a generic-type-parameter type that you don't know is a struct or a class. Non-nullable types would bring these issues to the fore.

It feels weird when you have types like Int??. Maybe generic type parameters should just range over types, and not over the nullability of those types as well.

I guess the overload trick I used in IfLet uses overloading+duplication as a sneaky way to write code that operates on "T" regardless of whether "T" means "T?" (for class types which might be null) or "T!" (for structure types which never mean null). That's a weird way to write code.


....

The other thing is, as VSadov said, I can't imagine any solution where non-nullability is rigorously enforced the way it is in Swift. For instance, if you have StrongBox<string!> x and cast it to object and downcast it to StrongBox<string?> then you'd be able to assign null to it, breaking the non-nullability guarantees. Swift doesn't have these because its nullables are enums.

That said, I don't think this is a problem. The goal of non-nullable types should be to make it easy to fall into the pit of success, rather than to eliminate every single loophole. I note that C# inherently already has a loophole that Swift lacks (C# order of initialization of variables and calls to base class), so there are some loopholes.

It reminds me of the difference between returning a List<T> which you merely cast to IEnumerable<T>, vs constructing a new instance of ReadOnlyCollection<T> in order to return it as IEnumerable<T>. This is called a "best practice" but it's motivated by the dodgy belief that you're writing code and you don't trust the rest of your codebase to be reasonable, and you want to fight tooth and nail against callers who attack you by using the "as" keyword, but you don't care about callers who attack you by using reflection. What's the point?

I note too that Swift disallows certain downcasts, e.g. you can't cast from Object to Interface in core Swift (the only way to do it is by marking the interface as @objc so it can do the downcast via heavyweight reflection). Downcasting might be an area where it's okay to relax things a bit.
Aug 4, 2014 at 2:04 AM
lwischik wrote:
I don't see the breaking change. My idea is that
  • List<string> is a nullable list with nullable strings inside, and you don't need to ?. off the list or the strings
  • List<string!> is a nullable list with non-nullable strings inside
  • List!<string> is shorthand for List!<string!>, a non-nullable list with non-nullable strings. I imagine this comes from the (quite limited) rule, "if you have an outer generic type with ! on it, then implicitly apply this to everything inside", no more.
Hey now I get it! It's a nice solution! My pedantic note was not that pedantic after all :) Basically what you're doing is creating a micro option symmetric at the declaration level, reducing the hassle where it's painful but without creating a bipolar C# where you have to change your way of thinking all the time. It's not a perfect solution but definitely better than mine.
By the way, my VB experience is what makes me so adamantly opposed to "option symmetric" :)
I like the IfLet method! But it's just one of the few cases where overload resolution works. If the nullable is in the return type instead of a parameter, then you need overloads on generic constraints. Pretty much any code returning default(T) (FirstOrDefault, SingleOrDefault) is already a workaround to this problem. Overloading on generic constraints will relieve some pain on the client code, but the library will still have some code duplication. Not a big deal for these simple methods, but since it is based on overloads your solution is viral: any method using it will also need to be split into many versions, one for classes, one for structs, and, if nested nullables are not allowed, another one for nullables.
I agree. Deep down, I suspect generic type parameters would need to work a bit differently. If you just write "f<T>(T x)" -- well, are you required to write x?.g(), or can you write x.g(), and if so, is this short for x!.g()? The ?. operator already has some of these issues, but only in corner cases, e.g. var x = a?.b isn't allowed if b is of a generic-type-parameter type that you don't know is a struct or a class. Non-nullable types would bring these issues to the fore.

It feels weird when you have types like Int??. Maybe generic type parameters should just range over types, and not over the nullability of those types as well.

I guess the overload trick I used in IfLet uses overloading+duplication as a sneaky way to write code that operates on "T" regardless of whether "T" means "T?" (for class types which might be null) or "T!" (for structure types which never mean null). That's a weird way to write code.
Would it be possible to write just this method
public V? TryGet<K, V>(this Dictionary<K, V> dictionary, K key) 
{
}
And behave like this:
int? val = new Dictionary!<string, int>().TryGet("foo");
int?? val = new Dictionary!<string, int?>().TryGet("foo"); //nested normal nullable with two booleans
string? val = new Dictionary!<string, string>().TryGet("foo"); //aka string
string?? val = new Dictionary!<string, string?>().TryGet("foo"); //Nullable for reference type. one boolean and one reference that can point to null.  
I'm happy with double nullables. If you have a Dictionary<string, int?> and you TryGet on it, I think I should be able to differentiate a key not in the dictionary from a null value in the dictionary. Same for string?. We can always add a simple Flat method to nullables.
public T? Flat<T>(T?? value)
{
    if (value == null)
        return null;

    return value.Value; 
}
So turning back to generics: in sane languages this is not an issue. You shouldn't constrain on a type being nullable; if you need it, you can nullify T on the variables/parameters/return types where you need it. So maybe we should add your amazing ! on the generic parameter to indicate: this class only accepts non-nullable generic arguments.
public class SortedList<K!, V> where K : IComparable {}

new SortedList<int?, Person!>! --> This is a compilation error because int? is not IComparable
new SortedList<string?, Person!>! --> This is a compilation error because string? is not IComparable

Person? person = new SortedList!<string, Person>().TryGet("hi"); //This works
....

The other thing is, as VSadov said, I can't imagine any solution where non-nullability is rigorously enforced the way it is in Swift. For instance, if you have StrongBox<string!> x and cast it to object and downcast it to StrongBox<string?> then you'd be able to assign null to it, breaking the non-nullability guarantees. Swift doesn't have these because its nullables are enums.

That said, I don't think this is a problem. The goal of non-nullable types should be to make it easy to fall into the pit of success, rather than to eliminate every single loophole.
It reminds me of the difference between returning a List<T> that you merely cast to IEnumerable<T>, versus constructing a new ReadOnlyCollection<T> in order to return it as IEnumerable<T>. The latter is called a "best practice", but it's motivated by the dodgy belief that you can't trust the rest of your codebase to be reasonable: you fight tooth and nail against callers who attack you with the "as" keyword, yet you don't care about callers who attack you with reflection. What's the point?

I note too that Swift disallows certain downcasts, e.g. you can't cast from Object to Interface in core Swift (the only way to do it is by marking the interface as @objc so it can do the downcast via heavyweight reflection). Downcasting might be an area where it's okay to relax things a bit.
I agree that some hard-to-find loopholes are better than a massive backwards incompatible solution, or just not solving the problem at all.

Anyway, my solution moves the null checks out to the arguments, so this issue is embraced rather than fought. I would even let people cast List<string> to List<string!> (or List!<string>, which is the same thing).
List<string> list = new List<string> { "hi", null, "bye" }; 

List!<string> list2 = (List!<string>)list;

string s = list2[1]; //InvalidOperationException: unexpected null value
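The deferred-check semantics described above can be sketched in today's C# with a wrapper type (NonNullList is a hypothetical name, not a proposed BCL type): the "cast" is cheap, and the null check only happens on element access.

```csharp
using System;
using System.Collections.Generic;

// Wraps an existing list; accessing a null element throws lazily,
// mimicking the proposed (List!<string>) cast behavior.
struct NonNullList<T> where T : class
{
    private readonly List<T> _inner;

    public NonNullList(List<T> inner) { _inner = inner; }

    public T this[int index]
    {
        get
        {
            T item = _inner[index];
            if (item == null)
                throw new InvalidOperationException("unexpected null value");
            return item;
        }
    }
}
```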
I note that C# inherently already has a loophole that Swift lacks (C# order of initialization of variables and calls to base class), so there are some loopholes.
That's a really interesting point! C# does not push object initialization into the constructor too hard; simple constructors followed by initializing the properties are a much more common approach. When using databinding and the same ViewModel for both creating and editing entities, this could be a problem.

So far, if we have a Sex property we have to make it Sex? so that, in the user interface, it's shown as - and the user has to choose Male or Female; but in the rest of the application we want to see Sex as non-nullable, and it's just inconvenient that it's nullable.

When strings or references to other entities become non-nullable, this problem will be expanded.

But this is a problem for another day :)
Aug 4, 2014 at 4:33 AM
No matter what type declarations we use, the BCL and tons of other code we interact with are still going to return nulls.

The real value of this proposal is to ensure that our own local variables and fields never pass nulls when we'd rather assume a value (like "" for strings, which, other than collections, seems to be the best candidate for this treatment).

So why not just wrap the troublesome reference type in a struct, which already is a value type?

Here's a partial implementation of a "value type" version of String, with implicit conversion to and from strings that auto-coalesces on the way out.

It's easy enough to match String's built-in constructors, methods, and properties (as I've done with Length as an example), or you could access them via the Value property. The former would be more syntactically similar to using a plain old string, but the latter could be applied more easily in a generic solution.
public struct StringValue {
    private string _value;
    public StringValue(string value) {
        _value = value;
    }
    public string Value {
        get { return this; }
        set { this = value; }
    }
    public static implicit operator string(StringValue value) {
        return value._value ?? "";
    }
    public static implicit operator StringValue(string value) {
        return new StringValue(value);
    }
    public long Length {
        get { return _value==null ? 0 : _value.Length; }
    }
}
The only down sides I see to this approach:
  • Any extension methods you've written for String wouldn't be available unless you cast first or use them via the Value property (though you could certainly add overloads for them that take StringValue).
  • There's a tiny bit of additional memory allocated to store the struct, which contains only a reference to the internal string.
Here's a Fiddle to play with it:

https://dotnetfiddle.net/ba1q8D
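For reference, a short usage sketch, assuming the StringValue struct defined above is in scope: nulls coalesce to "" on the way out, so member access never throws.

```csharp
string raw = null;
StringValue s = raw;   // implicit conversion from a null string
string back = s;       // back == "", thanks to the coalescing conversion
long len = s.Length;   // 0, no NullReferenceException
```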
Aug 4, 2014 at 10:57 AM
richardtallent wrote:
So why not just wrap the troublesome reference type in a struct, which already is a value type?
This trick could only work for string, and maybe arrays or some other collections. But there's no good default value for a Person, Window, Connection, PropertyInfo, ComboBoxItem, and a looong etc... The only possibility is a NullPerson class that throws an exception on any access... we already have that, it's called null, and I don't want it in my methods.

Haskell and Swift have already shown the way; let's see how closely C# can follow it without breaking everything.
Aug 4, 2014 at 8:23 PM
Olmo wrote:
This trick could only work for string, maybe even array or some other collections.
It could also work via generics for any reference type that has an empty constructor, though you'd have to access the underlying value's properties/methods via the Value property.
But there's no good default value for a Person, Window, Connection, PropertyInfo, ComboBoxItem, and a looong etc...
The addition of a new language feature (particularly one that uses new punctuation or keywords) should be able to provide significant value. So, if my proposal works fine for strings and collections, let's focus on other types of objects that we don't want to silently coalesce to a default value when nulls occur.

Absent automatic coalescing, it seems the only value of this "!" business is to move the null exception closer to the variable's assignment rather than its first attempted use, and ideally to provide both some compile-time and run-time checking.

Code contracts do this, but are wordy and don't appear in the parameter declaration. So maybe that's the problem to solve.

I have to say I'm just not a fan of adding "!" to the end of a variable to mark it as non-nullable. I know Spec# does it, but "!" already means something in C# -- "not". Making it mean something else just interrupts my train of thought reading the code and creates potential ambiguity.

But if somehow we could name a contract (say, as "NotNull" --> "Contract.Requires( x != null )"), I could imagine a more generic means of applying named contracts to variables/parameters using something like this:
public string GetUppercaseName(string!NotNull name) {
   return name.ToUpper();
}
This would be a shorthand for:
public string GetUppercaseName(string name) {
   Contract.Requires(name != null);
   return name.ToUpper();
}
But since we've named the contract we want, we could use it for other restrictions, and could even be chained (whitespace optional):
public int!NotZero Denominator { get; set; }
public int !LessThan(200) !GreaterThan(20) PersonHeightInches { get; set; }   // multiple contracts
public float!(Math.Abs(x) <= 90) Latitude { get; set; }    // Possible example of an anonymously-defined contract
With this approach, we make contracts much less wordy, move them into the declaration, and reuse them. I also like the fact that the "!" is followed by something that makes it clear what we're trying to achieve, and we're able to use it to solve more than just nulls. Since it's just shorthand for contracts, we're not inventing a whole new slew of stuff for the compiler, IDE, or runtime to keep track of. And for trivial cases, we can use it with automatic properties to avoid having to deal with backing fields.
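For comparison, here is roughly what the long form looks like with today's Code Contracts (System.Diagnostics.Contracts); note that runtime enforcement requires the ccrewrite binary rewriter, and Fraction is a made-up example type:

```csharp
using System.Diagnostics.Contracts;

class Fraction
{
    private int _denominator = 1;

    public int Denominator
    {
        get { return _denominator; }
        set
        {
            // The verbosity the proposed "int!NotZero" shorthand would remove.
            Contract.Requires(value != 0);
            _denominator = value;
        }
    }
}
```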
Aug 4, 2014 at 10:05 PM
Edited Aug 4, 2014 at 10:06 PM
What is it about C-style programmers that they can't handle context-sensitive operators? Everything can only have one meaning?

!boolean_variable does mean not, in the case of booleans.

typename! means non-null.

No conflict of interest / meaning.

You already have an example in the language: ? -- is that nullable or an inline if? expr<bool> ? true_expr : false_expr;

All operators are context-sensitive; their meaning is determined by the type that implements them. It's only a convention that < <= == != >= > mean relational comparison. You as a programmer are free to give them whatever meaning you want. There is nothing in the language specification dictating that the return type has to be a boolean.

VB programmers have had context sensitivity forever. Maybe they are better?


I like the concept of a static default instance.
interface INonNull { }

class Example : INonNull
{
  static default nonnull () : Example
  {
    /* returns a non-null instance of Example, enforced at compile time and runtime */
    return ...
  }
  ...
}
or have a new operator?
class example
  static operator nonnull () : example 
Acts like a cast? example(nonnull)

Non-null parameters could operate similarly to optional parameters: mymethod( string! x = "" ). But this leads to the issue of how you express optional non-null parameters.
Aug 5, 2014 at 3:00 AM
Edited Aug 5, 2014 at 3:05 AM
AdamSpeight2008 wrote:
What is it about C-style programmers that they can't handle context-sensitive operators?
lol... I'm BASIC all the way back to programming a TRS-80 MC-10 in my garage as a kid. I've only been using C# heavily for 6 or 7 years. :)

Part of my reticence about this "!" business is actually the fact that VB already has a "!" operator, which operates (sort of) like "." for classes and is used on dictionaries as an unquoted shortcut for Item. So yes, context-sensitive operators are a reality, but there should still be a reluctance to add completely new meanings to old punctuation.
You already have an example in the language: ? -- is that nullable or an inline if? expr<bool> ? true_expr : false_expr;
A fair point, though I'm not a fan of nullable value types either; they seem to have little utility outside of interacting with databases, and even there they tend to just move the problem, because at some point you're probably going to have to use the value in a context that doesn't allow for nulls. But I digress.
I like the concept of a static default instance.
Ditto, which is what the proposed workaround for String above was all about.

But more in line with your concept, maybe it could be easier than implementing a new interface? Perhaps, instead, you should designate a variable as non-nullable using an attribute, and as long as the type has an empty constructor, the compiler would adjust every assignment to that variable to coalesce to that constructor, basically converting this:
[NoNull] foo v = GetCurrentFoo();
v = GetAnotherFoo();
into this:
[NoNull] foo v = GetCurrentFoo() ?? new foo();
v = GetAnotherFoo() ?? new foo();
No new types needed, no casting required, no new punctuation use, and the implementer of foo doesn't have to build in support for it (as long as they provide a suitable empty constructor).
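Without compiler support, the same coalesce-on-assignment can already be hand-written with a property (Foo and Holder are placeholder names for illustration):

```csharp
// A hand-rolled version of the [NoNull] transformation described above:
// every assignment is coalesced, so the backing field is never null.
class Foo { }

class Holder
{
    private Foo _v = new Foo();

    public Foo V
    {
        get { return _v; }
        set { _v = value ?? new Foo(); }  // nulls silently become fresh defaults
    }
}
```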

But from the response to my previous idea with structs, it seems that the promoters of this "!" idea aren't really looking for automagic defaults, they still want exceptions when nulls are used, they just want them to happen at the time of assignment rather than later on when they try to access instance members or pass the null to something that can't handle it.

I think contracts are a better way to do that, which is why I proposed a hybrid concept of using the "!" operator as shorthand to add contracts to a given local variable, settable property, or parameter. This would give the non-nullable advocates a syntax that is close to their ideal while providing a sweeter way in general to add code contracts.
Non-null parameters could operate similar to optional parameters mymethod ( string! x = "" ) , this leads to issue of how do you express option nonnull parameters?
A non-null optional parameter could be expressed a different way:
public void FooPrinter(foo x =?? new foo("no foo to give")) {
   Console.WriteLine(x.ToString());
}
I find these semantics to be easier to grok, because my poor BASIC-turned-C# brain already has neurons allocated to recognizing "=" and "??", so composing them into the concept of an optional value that is applied if the argument is either not provided or is null makes the intent reasonably intuitive.

This would, however, require some reworking of how optional parameters are handled, since defaults currently must be compile-time constants -- let alone expressions that would invoke constructors or dig up a default instance some other way.
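One plausible desugaring of =?? in today's C#: keep null as the declared default and coalesce in the method body (foo and Printer are placeholder names following the example above):

```csharp
// The parameter stays optional (null is a valid compile-time default),
// and the expression default is applied by a null-coalesce in the body.
class foo
{
    private readonly string _msg;
    public foo(string msg) { _msg = msg; }
    public override string ToString() { return _msg; }
}

static class Printer
{
    public static void FooPrinter(foo x = null)
    {
        x = x ?? new foo("no foo to give");
        System.Console.WriteLine(x.ToString());
    }
}
```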
Coordinator
Aug 5, 2014 at 3:06 AM
richardtallent wrote:
But from the response to my previous idea with structs, it seems that the promoters of this "!" idea aren't really looking for automagic defaults, they still want exceptions when nulls are used, they just want them to happen at the time of assignment rather than later on when they try to access instance members or pass the null to something that can't handle it.
I wouldn't say that at all! I think the whole point of non-nullable types is to get compile-time errors rather than runtime exceptions (for the bits of your code that use non-nullable types). That's the point of all type systems.

Of course there'll always be the possibility of exceptions at the bridge between the nullable and non-nullable bits of your code. But they're localized.
Aug 5, 2014 at 5:38 AM
Edited Aug 5, 2014 at 5:51 AM
The dictionary "!" has to come after an instance name; the proposed non-null "!" has to follow a type name, so there's no conflict. Also, the key after the dictionary "!" has to be known at compile time.

The issue I have with an attribute approach is that supporting it in the CLR would involve reflection on every access of a non-null. But we still have the issue of initializing a non-null.

Could we add a new instruction to the CLR VM?
Or redo the CLR to include non-null?
Aug 5, 2014 at 7:04 AM
lwischik wrote:
I think the whole point of non-nullable types is to get compile-time errors rather than runtime exceptions.
With the change being suggested, value and reference types can both be declared in nullable and non-nullable forms. Sure, this requires the Nullable<T> wrapper under the hood for nullable value types, and there are still default behaviors for value vs. reference types. But overall, from the developer's perspective, "nullability" is no longer really about value vs. reference types, and it's not really a feature of the type either; it's a contract they choose to be enforced on a particular variable.

Since C# now has code contracts (complete with some static analysis) that are perfectly capable of null checks and much more, but they are only available via bulky method calls, I'm just suggesting that it would be a bigger win if the same language feature we add for null issues also provides the ability to enforce other contracts.

In other words, if we must now deal with more "!" symbols in a different context, I'd like more buck for the bang. :)
Aug 5, 2014 at 7:36 AM
Edited Aug 5, 2014 at 7:37 AM
AdamSpeight2008 wrote:
The dictionary "!" has to come after an instance name
I'm not saying that there's ambiguity for the compiler. What I'm saying is that developers are human, and using symbols that already appear around types and instances for yet another meaning (and one that is pretty important) is unnecessarily obtuse.

That's why I'm suggesting !NotNull (after the type name or the instance name, I don't have a preference) instead. I don't think it's an obscene amount of additional typing, it's clearer in intent to less-experienced developers, and it opens a world of other contracts that could be specified in the same way.
The issue I have with an attibute approach is, to support it in the clr would involve reflection on every access of a nonnull. Butwe still have the issue of initilising a nonnull.
No CLR support needed. The attribute would merely tell Roslyn to transform the source code at every assignment to that variable, adding a coalesce operator and default value after the assignment expression. Something similar would happen for parameters -- a coalescing assignment would be added to the beginning of the method body. Public fields would be more of a challenge; it might have to rewrite them as properties so it can control all assignments coming in from outside the assembly.
Aug 5, 2014 at 11:36 AM
richardtallent wrote:
That's why I'm suggesting !NotNull (after the type name or the instance name, I don't have a preference) instead. I don't think it's an obscene amount of additional typing, it's clearer in intent to less-experience developers, and it opens a world of other contracts that could be specified in the same way.
I was complaining about:
Dictionary<string!, List<string!>!>!
but with your solution it looks much better
Dictionary<string!NotNull , List<string!NotNull >!NotNull >!NotNull 
A fair point, though I'm not a fan of nullable value types either,
How can someone be against nullables and non-nullables at the same time?
Aug 5, 2014 at 4:16 PM
Olmo wrote:
Dictionary<string!NotNull , List<string!NotNull >!NotNull >!NotNull
A bit contrived. Dictionary<> already requires a non-null key; generic collections in particular could implement no-null-value versions with no change to the language; and nested collections like this should probably be implemented as a multi-value dictionary instead, so empty Lists are instantiated as needed (which "!" would not solve).
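The multi-value dictionary alternative mentioned here can be sketched without any language change (MultiDictionary is a hypothetical name, not a BCL type):

```csharp
using System.Collections.Generic;

// Empty lists are created on demand, so no inner List is ever null.
class MultiDictionary<K, V>
{
    private readonly Dictionary<K, List<V>> _map = new Dictionary<K, List<V>>();

    public void Add(K key, V value)
    {
        List<V> list;
        if (!_map.TryGetValue(key, out list))
        {
            list = new List<V>();
            _map[key] = list;
        }
        list.Add(value);
    }

    public IList<V> this[K key]
    {
        get
        {
            List<V> list;
            return _map.TryGetValue(key, out list) ? list : new List<V>();
        }
    }
}
```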

Also, didn't you just say this above?
I just hate writing Dictionary<string!, List<string!>!>!. It looks like I'm a teenager asking for attention.
I agree. :)
How can someone be against nullables and non-nullables at the same time?
As for nullable value types (Nullable<T>), I haven't found them to be useful, despite the fact that my work is mostly database-driven web apps. I'm not on a warpath against them or anything, I just didn't find the addition to the language solved anything other than some conundrums some people were having maintaining symmetry between null values in databases and the values in their corresponding variables. Besides, if nullable references are considered harmful ("billion dollar mistake" and all that), then the addition of nullable value types probably shouldn't be lauded.

I'm not against non-nullable reference types, only the terse syntax suggested (or any changes to the CLR, which I understand your solution avoids). But, it seems I may be the only one who doesn't care for the bangs.
Aug 5, 2014 at 4:37 PM
A bit contrived.
At least for the kind of code we write, it's not. Here's just the first file I opened:
static Expression<Func<FileDN, WebImage>> WebImageFileExpression;  //Ideal
static Expression!<Func<FileDN, WebImage>> WebImageFileExpression;  //lwischik
static Expression<Func<FileDN!, WebImage!>!>! WebImageFileExpression; //Olmo (old)
static Expression<Func<FileDN!NotNull , WebImage!NotNull >!NotNull >!NotNull  WebImageFileExpression; //richardtallent
Yours is almost impossible for my eyes to parse.

Due to backwards compatibility, maybe we are not that lucky, but your bang syntax is way too much clutter. Almost everything is non-nullable; you'll push the signal/noise ratio close to zero with your syntax.

Of course I don't like putting ! everywhere. I think nullability should exist but shouldn't be the default, because most of the time you don't want nulls.

Sometimes you need them, like DateTime? CancellationDate and User? CancellationUser; in those cases we should annotate them, pretty much like Swift or Haskell. I think storing them as null is much better than using magic values like DateTime.MinValue or -1.
Aug 6, 2014 at 12:24 PM
Edited Aug 6, 2014 at 12:34 PM
What if the base class Object was modified?
base class Object
{
    overridable static Object Null ()
    {
      /* Return the default value / instance of this class / structure (could be null), */
      /* just like the existing implementation. */
      return default(this);
    }
}
When overridden:
class ObjT
{
   static ObjT _default = new ObjT();

   overrides static ObjT Null ()
   {
     /* When null is used for this type, it is replaced by the result of this function. */
     /* The compiler enforces that this function cannot return null if overridden. */
     return _default;
   }
}
This allows the class not to have a public default constructor.

So now type! acts as a lifted type (like Nullable<T>). The compiler could then implement the static Null for the existing classes, with a sensible instance.
string! x ;        /* value: String.Empty */
string! x = null;  /* value: String.Empty */
Casting a null to a T! would result in the static null being called.
Casting a T! to T would always be valid, since every non-null T is a T.

Other alternative function names:
  overridable static Object Default () {...}
  overridable static Object _New () {...}
Aug 6, 2014 at 2:22 PM
AdamSpeight2008 wrote:
What if the base class Object was modified?
I don't think this could work because there is no sensible default value for most objects (Person, Window, ComboBoxItem, Connection, HttpRequest...).

There are enough constructs in the language already to create an API when a sensible default value exists (overloaded methods and constructors, default arguments, auto-initialized properties...), but these default values, if they exist, are different for each particular method.

Your solution tries to hide the problem, not solve it. It would create a forgiving language like JavaScript, where errors happen silently and too late in the stack trace. I prefer a NullReferenceException to having customers named "" in the database.

What we need is a type system that helps API developers communicate intent. You want a new Person? Then give me a proper name and date of birth (and I'm quite sure it's not the 1st of January of year 0001 at 00:00).
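That intent can already be expressed today, just more verbosely, by validating eagerly in the constructor (a sketch; this minimal Person is a made-up example, not the class discussed later in the thread):

```csharp
using System;

// The constructor demands the data up front, so a half-initialized
// Person can never exist -- the check fails fast at construction time.
class Person
{
    public string Name { get; private set; }
    public DateTime DateOfBirth { get; private set; }

    public Person(string name, DateTime dateOfBirth)
    {
        if (name == null) throw new ArgumentNullException("name");
        Name = name;
        DateOfBirth = dateOfBirth;
    }
}
```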
Aug 6, 2014 at 3:12 PM
Edited Aug 6, 2014 at 3:14 PM
@Olmo I think you slightly misunderstand. If your type wants to be (or support) non-nullable, it overrides the static function Null and returns a default instance (of your choosing).
If it doesn't want to (or can't) support non-null, don't override it; then null will be passed in, throwing a NullReferenceException.

Since it could be overridden, it can also be checked at compile time. IntelliSense / the IDE can display what the "default" will be if you pass a null.

Non-Null Person
class Person
{
  static Person _default = new Person( name: String.Empty, dob: default(Date) );
  /* static Person _default = new Person( name: "**Invalid Name**", dob: default(Date) ); */

  overrides static Person Null ()  /* This makes it a non-nullable class. */
  {
    return _default; /* A NullObject for the Person class */
  }
}
Null-able Person
class Person
{
  /* doesn't override static Object Null()  */
  /* So is a Null-able Class */
}
I may be wrong about the compiler; I actually meant the BCL / .NET framework.

Some of the proposed ideas require a public empty constructor, which I don't think is a good thing to insist on. What if you want to create instances only via factory methods?

If the object has an Invalid / Null Object instance, you would still need to compare:
method ( ObjT! arg )
{
  if ( arg is ObjT.NullObjT ) throw new ArgumentException();
}
So why not do a null check in the first place?
Aug 6, 2014 at 3:32 PM
Edited Aug 6, 2014 at 3:33 PM
AdamSpeight2008 wrote:
So why not do a null check in the first place?
I don't know! It's your idea, not mine! I suppose I'm still misunderstanding you, but why is your default person any better than null? I think it's just worse: a null zombie person that looks like a person but is empty.

I want a method Delete(Person! p) that takes a non-nullable person -- not a null value, nor a zombie default person.

Just try programming in Haskell or Swift and you'll know what I mean.
Aug 6, 2014 at 4:09 PM
Edited Aug 6, 2014 at 4:11 PM
There are two separate problems with null that could use improvement:
  • The desire to ensure that certain references never take a null value by throwing an exception if it occurs (with compile-time checks ideally where the possibility is detected).
  • The desire to ensure that certain references never take a null value by silently swapping in a default value when a null assignment occurs.
Both have legitimate use cases, and both capabilities could be abused.

For the first problem, I'm warming up to Olmo's solution of option symmetric, which would check all reference variables/fields/parameters/setter-values in the current file for nulls. Option Strict is a good analogy. This allows developers to actually transition more easily (add it to the top, compile, and go fix all of the errors by either adding ? where nulls are kosher or fixing the code to ensure nulls don't get assigned -- no need to hunt around manually to find references that need to be addressed and possibly miss a few). And it negates the need for the "!" syntax. I'm not sure the word "symmetric" will be clear to the average developer, but perhaps there's another phrase that would be more intuitive.

For the second, I like:
  • For strings, my StringValue struct approach. It seems pretty easy to make a struct act like a "duck string" with a default instance value of "".
  • For other object types, a similar generic approach that uses default constructors to obtain a default value.
  • For multi-value collections, a new collection type, because we really shouldn't have to deal with initializing lists for Dictionary<T,List<U>>.
  • For method parameters, the =?? syntax to provide an optional value for both omitted and null arguments (with the enhanced ability to have an expression there, not just a literal value, which would have to be implemented differently than optional parameters are now).
I'm still in favor of making code contracts more integrated with the language, so Olmo's birthdate field can also be invalid for dates in the future or the far past as part of the declaration rather than via calls to Contract in the body below. But that should probably be a separate discussion thread if we can solve the first null issue with a compiler switch.
Aug 6, 2014 at 4:45 PM
Edited Aug 6, 2014 at 4:47 PM
class Person
{
  private String!  _Name;  /* String.Empty is the non-null default for String! */
  private   Date   _DateOfBirth;
  private   Date?  _DateOfDeath;
  private Boolean  _ValidState = false;

  public Person( String! name,
                   Date  dateOfBirth,
                   Date? dateOfDeath = null )
  {
    _Name = name;
    _DateOfBirth = dateOfBirth;
    _DateOfDeath = dateOfDeath;
    _ValidState = true;
  }

  private Person ( String! name, Date dateOfBirth, Date? dateOfDeath, Boolean valid )
  {
    _Name = name;
    _DateOfBirth = dateOfBirth;
    _DateOfDeath = dateOfDeath;
    _ValidState = valid;
  }

  public String!        Name  { get { return _Name;        } }
  public Date    DateOfBirth  { get { return _DateOfBirth; } }
  public Date?   DateOfDeath  { get { return _DateOfDeath; } }
  public Boolean     IsValid  { get { return _ValidState;  } }

  public Boolean IsAlive()  { return _DateOfDeath.HasValue == false; } 

  /* Implements non-null Person */
  static Person! _defaultPerson = new Person( String.Empty, default(Date), null, false );

  overrides static Person Null() { return _defaultPerson; }
  /* ------------ */
}
Example
delete ( Person p )
{
  if ( p != null )
  { 
     _delete( p );
  }
  else
  {
   throw new NullReferenceException();
   /* RaiseEvent NullExceptionEvent( p ); */
  }
}

delete ( Person! p )
{ 
  if ( p.IsValid )
  {
    _delete( p );
  }
}

private _delete ( Person! p )
{
    /* do the actual delete */
}
Suppose we didn't have the first overload: the method would still "correctly" process the null (because the null is replaced by the default instance, _defaultPerson).
Another example is where we return a derived type as the default instance (NullPerson), which you handle via overloaded methods.
sealed class NullPerson : Person { ... }

delete ( NullPerson p ) { }
delete ( Person p ) { }
If you're interfacing with legacy code (or a framework) that doesn't support non-null classes, then you'll still require null checks.

What would the default constructor for the above Person be?
public class Person
{
  public Person() : this( "", default(Date), null, false ) { }
}
You get a new instance of the Person for every null, whereas with the default instance you share the same instance -- for example, the way System.Drawing.Color.Black returns the same value of black every time.

There could also be some form of mechanism to provide extension default instances that allows you to add non-null support to existing classes.
Aug 6, 2014 at 5:10 PM
Edited Aug 6, 2014 at 5:20 PM
Function VowelCount ( text As String! ) As Integer
  Return text.Count( Function( c As Char ) "AaEeIiOoUu".Contains( c ) )
End Function
If null were passed into that function it would still return a "sensible" result, since it would use the default instance. Remove the ! and you'll get a NullReferenceException thrown.

If .NET is going to support non-null reference types (classes) at a fundamental level, then that level has to be at the base of all objects, which requires Object to be modified, so as to make the implementation standard across all of .NET (like when the Tuple types were introduced).
It would support the currently non-nullable structures, since they would override the default-instance function, as well as both nullable and non-nullable classes. Then you have non-null integrated at the basic object level (not just the language level).

So a non-null class would be implemented via the introduction of a new nonnull prefix on the class declaration.
nonnull class Person 
{
  static Person! Null ()  /* this would then be explicitly be required because of the nonnull prefix to the class declaration. */
  {
  }
}
This inherits from a new base class System.NonNullClass, which in turn inherits from System.Class.

Since it is expressed in all objects, the type checker (once modified) can track non-null, such that if the object doesn't support non-null a warning (or error) is raised in the IDE / compiler.
Aug 6, 2014 at 5:14 PM
Edited Aug 6, 2014 at 5:16 PM
Looking at where the discussion is going: I am completely against the idea having a "default" non-null object for everything. Let me explain why.

What is good about null is that it's an explicitly non-existing object. Any attempt to access it crashes as early as possible, and it's good this way. Replacing the honest null with an artificial "default object" doesn't add anything to the semantics. Everywhere the code checked for null, it would now need to compare with the same default object. On the contrary, we lose the "fail-fast" semantics of null, for no gain.

If some class just cannot have a default value according to its semantics, so be it. Inventing a default person or default color is plainly bad design: there is just no distinguished "default" person.

A non-nullable type must be a compile-time guarantee that the object has a sensible, non-default value; inventing some special object in order to defeat this semantics defeats the purpose of the whole feature.
Aug 6, 2014 at 5:30 PM
AdamSpeight2008 wrote:
If null were passed into that function it would still return a "sensible" result, since it would use the default instance. Remove the ! and you'll get a NullReferenceException thrown.
It returns a sensible result from the point of view of VowelCount. But think about this code:
Person p = new Person(); 
int count = VowelCount(p.Name); 
On the client side, VowelCount returns 0, but this just hides the problem (like JavaScript, or maybe Visual Basic, does). The compiler should be telling you that you didn't set the name of the person; failing that, an exception is better than a 0.

So VowelCount does the right thing returning 0 for "", but camouflaging null as "" is not the solution. The type system should help you by telling you to initialize uninitialized values, not by inventing default values for you.
Aug 6, 2014 at 6:01 PM
Edited Aug 6, 2014 at 6:39 PM
VladD wrote:
Looking at where the discussion is going: I am completely against the idea having a "default" non-null object for everything. Let me explain why.
We already have a form of default instance; it just happens to be null.
nullable class -> ( null )
structure -> default instance ( for an Int this is zero )
null structure -> Nullable< T where T is Structure >

What I'm proposing is just extending that so that a class can define an alternative default value.
nonnull class -> default instance, aka Nullable< T where T is Class >
What is good with null is that it's an explicitly non-existing object. Any try to access it just crashes as early as possible, and it's good this way. Replacing the honest null with an artificial "default object" doesn't add anything to the semantic.
Yes it does: it allows the programmer to extend the semantic meaning. Suppose IEnumerable<T> xs could default to Enumerable.Empty<T>?
Everywhere where the code checked for null, now it would need to compare with the same default object. On the contrary, we are losing "fail-fast" semantic of null, for no gain.

No it won't. If you require a non-null instance, why are you checking for a non-existent object? Just use the normal type and check for null.
If you insist on non-null, then why not allow a default value? (Or extend optional type arguments so you can alter the default null?)


If some class just cannot have a default value according to its semantics, so be it. Inventing a default person or default color is plainly bad design: there is just no distinguished "default" person.
That's incorrect, since Color is a structure and hence has a default color. (It just happens to be black with an alpha of zero.)
Non-nullable type must be a compile-time guarantee that the object has a sensible, non-default value, and inventing some special object in order to defeat this semantics is a way to defeat the purpose of the whole feature.
It isn't a special object; where do you get that idea? The Null function is checked, so it must return a fully instantiated instance, like for optional parameters.
Aug 6, 2014 at 6:03 PM
Olmo wrote:
As the client side, VowelCount is returning 0, but this is just hiding the problem (like Javascript or maybe VisualBasic does). The compiler should be saying that you didn't set the name of the person, if not, better an exception than a 0.
Why an exception? Why not raise an Event?
Aug 6, 2014 at 6:57 PM
AdamSpeight2008 wrote:
Olmo wrote:
As the client side, VowelCount is returning 0, but this is just hiding the problem (like Javascript or maybe VisualBasic does). The compiler should be saying that you didn't set the name of the person, if not, better an exception than a 0.
Why an exception? Why not raise an Event?
There are people who like to write correct code, and there are people who want to keep the code running no matter what (On Error Resume Next).

This is the C# section, so you're not going to find a lot of support for your idea here.

Maybe it's better that you open a new thread about default object values, because even though it's related to this one, it goes in exactly the opposite direction: non-nullables are about catching errors at compile time; yours is about avoiding NullReferenceException and (presumably) finding the error later.
Aug 7, 2014 at 12:28 AM
My proposal has been evolving with the feedback over these months. Here is a compilation of my current design: https://gist.github.com/olmobrutall/31d2abafe0b21b017d56

I'd love to hear about potential breaking changes or more holes in the type system that I'm not aware of.
Aug 7, 2014 at 6:25 PM
Edited Aug 7, 2014 at 6:26 PM
Olmo wrote:
There's people who like to write correct code, and there's people who want to keep the code running no matter what (On Error Resume Next).
While I agree that auto-coalescing to a default value other than null could be abused, there are situations where it is appropriate. Strings, collections, and optional parameters are three strong examples. Not all cases of these, of course, but there is a solid use case.
Maybe it's better that you open a new thread about default object values
I agree it should probably be a different topic of discussion, and come together only when it's time to implement them (if that ever happens).
This is the C# section, so you're not going to find a lot of support for your idea here.
Some in more snobbish corners of the C# crowd loudly derided var too, because they said it would lead to people getting back types they didn't expect, and that in a strongly-typed language like C#, all types should be explicitly declared by the developer and checked at compilation. In the end, it ended up being a great tool for creating readable code (though I'm sure there are still people out there who hate it and never use it). I think some types of automatic coalescing could fall into the same category.
Aug 7, 2014 at 7:08 PM
Edited Aug 7, 2014 at 7:11 PM
In this particular case, let's make a rough estimate of when these default values will be useful:
  • Optional parameters/overloads/field initializers usually provide a good default behavior because they are close to the end problem. 80% accuracy, but we already have this.
  • For value types, numbers sometimes get it right (50%), Guids don't (0%), DateTimes don't (0%), Colors sometimes (20%); we already have all this anyway.
  • Collections could be happy with an empty collection: like 50% accuracy for Lists and Dictionaries, and just 5% for arrays.
  • Not many APIs will benefit from a default string of "" (10%?). Most of the time the name has to have some meaning or be null.
  • Normal reference types like Person, ComboBox, Windows, HttpContext, etc... 0%
So pretty much this could work for collections, but the idea of the default instance, in order to be efficient, requires that it is a shared object. What a nightmare that would be for List<T>!
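A sketch of why a shared default instance is a nightmare for mutable types (hypothetical syntax; assumes default(List<T>!) would be the shared empty list):

```C#
List<int>! a = default(List<int>!);   // hypothetically, the one shared "default" list
List<int>! b = default(List<int>!);   // same object

a.Add(1);                             // b now "contains" 1 as well:
                                      // every defaulted list in the program shares this state
```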

If, on the other hand, a new List<T> is created every time a List<T> reference is created, that will keep the GC really busy. For example:
  • You create a Dictionary<string, List<ComboBox>>; let's suppose the Dictionary is created with, say, 4 buckets with 4 elements each: 16 lists!
  • Each of the lists has an initial array of 5 values.
  • Now you have created 80 absolutely worthless zombie ComboBox instances.
So... it's a bad idea: it solves very few cases gracefully, creates significant performance problems, and delays bug detection.

Type systems are not there to invent random complaints to make you unhappy; your machine would be just fine writing random bits to RAM for the rest of the evening.

Type systems are there to help our primate brains build something that works. Initializing variables with sensible values is important, and it's easier for the type system to keep track of than for us.
Aug 8, 2014 at 7:06 AM
Olmo wrote:

I thought you wanted to move this to another thread? ;)
  • Optional parameters/overloads/field initializer
While there is a way to default when an argument is not provided, there is no convenient way to assign a default value when an argument is provided but is null.

Also, the defaults for optional arguments are currently limited to compile-time constants because of the way they are implemented (the caller still provides all arguments under the hood). This approach would allow the caller to send nulls, and the method to replace the nulls automatically -- which means the default could be an expression that isn't currently supported, such as a constructor call.

This is really a different situation than the others (and different means of implementation that is frankly a lot simpler), so maybe it should have its own thread too.
  • Not many API will be benefited by default strings of "" (10%?). Most of the time the name has to have some meaning or be null.
String being nullable is just an anomaly of .NET's underlying design (stack vs. heap, etc.); it is no more "right" for a string to be nullable semantically than a Date (and arguably, dates have a stronger case for needing null values).

In the work I do (focused on intranet business applications), optional string fields are very common, and it doesn't make a damned bit of difference whether it is a null or an empty string, though the latter is often more convenient.

In fact, I can't think of a single place in any of my active code base where a string storing null means anything semantically different from an empty string. I suspect I'm not alone -- otherwise, String.IsNullOrEmpty() would be rarely used rather than being recommended by Microsoft. So I'm gonna go with 90%. YMMV.
  • Collection could be happy with an empty collection, like 50% accuracy for List and Dictionaries, and just 5% for arrays.
I would go higher. As with string, an "empty" collection is for many use cases semantically the same thing as one that has not yet been instantiated. The reasons these are implemented as a reference type in .NET have nothing to do with needing to be nullable. They are essentially arrays on steroids.
  • Normal reference types like Person, ComboBox, Windows, HttpContext, etc... 0%
I disagree. Your focus on user accounts and UI controls just isn't representative of the universe of data manipulated in .NET. But I'm willing to concede it's a much smaller surface area, and where it occurs, it's probably because the type is "collection-like" or "value-like" in purpose -- for example, a group of properties that for one reason or another can't be implemented as structs, but for which nullability is not meaningful or particularly useful.

Besides, this whole notion of "accuracy" is specious -- I'm not suggesting that anything happen in terms of defaulting unless the developer asks for it using an attribute, wrapping it in a struct the way Nullable<T> is implemented, or some other explicit means. Just like your "!" -- something you explicitly turn on to avoid boilerplate code.
So pretty much this could work for collections, but the idea of the default instance, in order to be efficient, will require that it is a shared object.
Not at all. No one is creating new instances that wouldn't have been created before -- they would just be created implicitly when an attempt is made to access a reference that is currently set to null. Perhaps what you're saying would be the case for Adam's implementation concept, but there's more than one way to skin the cat. Personally, I think just coalescing to the result of the default constructor is good enough (with System.String modified to have a default constructor that returns an empty string).

Note that I'm not in favor of default values on public or protected Fields, so it should be possible to do all of this without creating hordes of zombies, changing the runtime, etc. It's all syntactic sugar that does a hat trick when the reference is accessed.
If, on the other hand, a new List<T> is created every time a List<T> reference is created, that will keep the GC really busy. For example:
  • You create a Dictionary<string, List<ComboBox>>; let's suppose the Dictionary is created with, say, 4 buckets with 4 elements each: 16 lists!
  • Each of the lists has an initial array of 5 values.
  • Now you have created 80 absolutely worthless zombie ComboBox instances.
I've already mentioned that common composed collections (such as dictionaries that can store multiple values) should be implemented separately, so the semantics make sense. It's silly to have to manage lists within dictionaries as a workaround, all null issues aside.

And with collections in general, there's a lot of distance between the collection being set so it can be seen as empty but not as null, and the elements of the collection having the same behavior.

The primary method I'm promoting (code transformation to add coalescing on assignment) would NOT impact the contents of the list, only what actually goes into your own variable when you pull a null value from the list. This means the "new" value would act like a string does now -- changing it wouldn't update anything about the null that is still sitting in the list. That is a concern of mine with this approach, but it's not insurmountable.
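As a sketch, the coalescing-on-assignment transformation described here could look like this (the generated code is hypothetical, and lookup is just a placeholder dictionary; it assumes coalescing to the default constructor's result, as proposed above):

```C#
// what the developer writes:
List<ComboBox> boxes = lookup["key"];

// roughly what the compiler would emit:
List<ComboBox> boxes = lookup["key"] ?? new List<ComboBox>();

// note: the new list is NOT written back, so the null is still sitting in lookup["key"];
// the local variable and the stored entry have diverged, as discussed above
```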
Aug 8, 2014 at 11:32 AM
Aug 8, 2014 at 4:13 PM
richardtallent I'll answer you in the new thread.

Let's refocus, here we're discussing this idea: https://gist.github.com/olmobrutall/31d2abafe0b21b017d56
Aug 10, 2014 at 1:03 AM
How would a collection of non-null reference types be initialized with a value in each item?
T![] xs = new T![10];

T! Q = xs[ 2 ]; /* What is the content? */
Is collection / array initialization an atomic operation?
Coordinator
Aug 10, 2014 at 1:58 AM
AdamSpeight2008 wrote:
How would a collection of non-null reference types be initialized with a value in each item?
I say: you simply can't initialize a collection with non-null values unless you specify the repeated value to go into them all. This is exactly what Swift does.
var xs = new T![10];  // compile-time error
var ys = new T![10] {initial_value}; // allowed
Aug 10, 2014 at 2:53 AM
Edited Aug 10, 2014 at 2:55 AM
@lwischik

What about this situation in VB.net ?
Dim xs () As T!
...
ReDim xs( 1 )
Will the array xs be in an invalid state?
Aug 10, 2014 at 8:14 AM
I suppose ReDim could also require an initial value for non-nullable types.

I'm not sure I like this idea anyway, because it would be a breaking change for generic collections with an open T that can be any type, nullable or non-nullable.

My proposal is to fill the array with nulls and throw an exception if you try to read a value that was not assigned before.
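In other words (hypothetical syntax; the exact exception is unspecified), the array would be null-filled internally and each read would be checked:

```C#
var xs = new string![10];   // all 10 slots internally start as null

xs[0] = "hello";
var a = xs[0];              // ok: the slot was assigned before being read
var b = xs[1];              // runtime check fails: throws rather than letting a null
                            // escape into a string! variable
```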
Aug 10, 2014 at 9:35 AM
The same thing happens with List<T!> as well, when it is resized; internally most collections use an array.
I think the contents should be null, but if an indexed item hasn't been written to yet, a write-back on first read access should be done. That doesn't require initializing with some non-null value (but you could).
ReDim Preserve xs( 2 ) 
Would involve at minimum two memory copy operations: first to copy across the original data, second to copy across the default value array.
Aug 10, 2014 at 9:55 AM
AdamSpeight2008 wrote:
The same thing happens with List<T!> as well, when it is resized; internally most collections use an array.
I think the contents should be null, but if an indexed item hasn't been written to yet, a write-back on first read access should be done. That doesn't require initializing with some non-null value (but you could).
That's exactly what I mean: by initializing the internal T![] to just nulls but checking every read of the array for non-null, the List<T> code should work, because I'm quite sure that List<T> doesn't access array positions that haven't been set before.
ReDim Preserve xs( 2 ) 
Would involve at minimum two memory copy operations: first to copy across the original data, second to copy across the default value array.
Aug 14, 2014 at 10:43 PM
I discovered this thread this evening and just want to add some thoughts to it.
  1. I also get the impression that there is no perfect solution available. This leads me to the questions:
    a. Are there better solutions available than the current state?
    b. If yes, is the investment (from the compiler group, the BCL, my own developed software, ...) worth it?
    (I would rate breaking changes as too costly.)
  2. I have seen a lot of null reference exceptions in a lot of programs.
    Therefore I guess that there is room for improvement.
    Making this better would reduce maintenance and debugging efforts and costs.
    I don't need a perfect solution; I just need a solution where the investment is smaller than the benefit.
  3. I know that there are existing solutions (Code Contracts, ReSharper annotations).
    (AFAIK they address a bigger area of constraints than just the null reference topic.)
    Several people/groups/libraries use such solutions, so I guess their developers came to the conclusion that it's worth the effort.
  4. I have seen a lot of methods that execute a lot of argument null checks at the beginning (ArgumentNullException checks).
    Therefore I assume that a lot of developers want to ensure that the method is called with non-null arguments.
    But: this check is invisible to the outer world and is not automatically part of the documentation.
If I look at my list I come to the following conclusions:
  1. I would add ! to my methods (parameter types and return types) so that...
    a. Argument checks are automatically created
    b. Result checks are automatically created (if needed to ensure)
    c. The documentation can be automatically created.

    I like the idea that the editor/tools can use this information to perform a flow analysis and show me warnings / errors.
    (If there are limitations... I will accept them, because I look at the things I get cheaply and not at the things I don't get.)
  2. I would add ! to my local variables, like "Person! personNotNull = OldFramework.GetPersonOrNot()", so that
    the compiler adds a null check for me.

    In this case ! is treated as "I want the compiler to ensure that personNotNull is a valid person".

    I would expect that no null check is added if the called method is a new-style method with a ! on the return type.
  3. I would assume that it should be possible to write a tool where a lot of argument checks in methods can automatically be replaced by the new syntax.
    I would also assume that it should be possible to write tools where the guaranteed non-null return can be deduced.
    => There is a chance that this piece of extra information is added across libraries.
    (e.g. via standalone tools, studio extensions, ReSharper auto formatting, ...)
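A sketch of what conclusions 1 and 2 could compile to (the ! syntax and the generated checks are hypothetical; GetPersonOrNot is the example name from above):

```C#
// today, written by hand:
int VowelCount(string name)
{
    if (name == null) throw new ArgumentNullException("name");
    /* ... */
}

// with '!', the argument check is generated by the compiler:
int VowelCount(string! name) { /* ... */ }

// and on a local, '!' asks the compiler to insert the check at the assignment:
Person! personNotNull = OldFramework.GetPersonOrNot();
// compiles roughly as:
//   Person tmp = OldFramework.GetPersonOrNot();
//   if (tmp == null) throw new NullReferenceException();
//   Person! personNotNull = tmp;
// (no check emitted if GetPersonOrNot already returned Person!)
```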
Until now, we have just...
  1. added a shorter syntax for adding checks
  2. used the same syntax to create something like a specialized form of code contracts / annotations
  3. defined a standard for how the information is encoded (e.g. attributes, ...)
    Tools, IDEs, ... can benefit from it.
=> I like it :)

I suppose that a project setting would also make sense:
"I care about non-nullable reference types - please check"
If I enable this setting (default is off), I want the compiler to check that I don't call a method with a potential null reference where the argument must not be null.


Now the symmetry...

Olmo wrote:
I will consider a new mode, option symmetric at the top of the C# file to bring C# in a world where T means non-nullable even for reference types and T? means nullable even for reference types. This option, similar to option strict in Visual Basic, will be new so backwards compatibility won't be an issue, if you want to use it you have to update this file. I hate the idea of having two modes in C#, but it's the one billion $ problem...
I would like a way to tell the compiler what I mean when I just use the type "Person".
The default should be the state of today, but I want to be able to switch it to "Person!".
Also here I assume it should be possible to create a tool which toggles the project setting and the sources (so that nothing is changed but the style).

Therefore I want the possibility to write (assume Person is a reference type):
Person? is treated as a reference type which can be null (like today)
Person! is treated as a reference type which shall never be null.
Person is treated per default as Person?, and I can switch it to "Person!".

I would expect that "Person?" has the exact same behavior as "Person" today. Therefore I don't expect the behavior of Nullable of value types.
(Because I respect the history of C# and reference types and I want to have a simple solution for everyone, ...)

I am pretty sure that I miss a lot of aspects, but I am also pretty sure that it is possible to improve the "non-nullable reference" topic in a way that
effort and outcome are balanced. For me it is worth it.

With best regards,
stefan
Aug 15, 2014 at 9:58 AM
Edited Aug 15, 2014 at 10:16 AM
Hi Stefan.

Thank you for spending the time to read this long conversation and write your thoughts.

I think we both think alike about the topic: it's worth it, it's doable, and backwards compatibility should be kept.

On the topic of option symmetric, I have abandoned the idea in favor of Dictionary!<string, List<string>>. This alternative doesn't split C# in two worlds and is nice enough.

Some open topics in my current solution are:
https://gist.github.com/olmobrutall/31d2abafe0b21b017d56
  • nested (and faked) Nullable types to simplify generic code. Do you like that?
  • methods returning default(T) will throw an exception when T is non-nullable. I hate these methods, but it is a breaking change nonetheless (though it will appear only as you change to non-nullable). Examples: TryGet, SingleOrDefault or FirstOrDefault. This query will break, for example:
Enumerable.Range(10, 0)
.Select(n => n.ToString()) // inferred to IEnumerable<string!>!
.SingleOrDefault()
The way to fix it will be to change it to a new SingleOrNull, but still...
  • should non-nullable be aggressively inferred? I think so
  • should default(string!) throw an exception? I think so
Aug 15, 2014 at 3:55 PM
Olmo wrote:
Hi Stefan.

Thank you for spending the time to read this long conversation and write your thoughts.

I think we two think alike about the topic; is worth, is doable, backwards compatibility should be kept.
I got the same impression.
  • methods returning default(T) will throw an exception when T is non-nullable. I hate these methods, but it is a breaking change nonetheless (though it will appear only as you change to non-nullable). Examples: TryGet, SingleOrDefault or FirstOrDefault. This query will break, for example:
Enumerable.Range(10, 0)
.Select(n => n.ToString()) // inferred to IEnumerable<string!>!
.SingleOrDefault()
I think a little bit differently about this...
  1. I agree that default of a non-nullable reference type shall throw an exception.
  2. But SingleOrDefault should not throw an exception.
Let me explain:
  1. The notion of IEnumerable<String!> is that the IEnumerable contains only valid (non-null) strings.
    And a perfect type system would ensure this.
    But it is still valid for the IEnumerable to be empty.
  2. The notion of SingleOrDefault (and all siblings like FirstOrDefault and co.) is, per definition, that a valid object is not always
    returned (e.g. for a reference type, null can be returned in case there is no valid object).
    This has nothing to do with the validity properties of the contents themselves.

    Therefore the definition of SingleOrDefault related to nullable would be:
    public static TSource SingleOrDefault<TSource>(this IEnumerable<TSource!> source) // returns TSource, not TSource! !!!

    All these xxxOrDefault methods have the meaning:
    Although the input is an IEnumerable of non-nullables, the output is always nullable.

    => Therefore these methods have to return default(TSource), not default(TSource!)
As I wrote this text I realized an interesting aspect of
public static TSource FirstOrDefault<TSource>(this IEnumerable<TSource> source)

There are two reasons why a default is returned:
  1. The source does not contain any item at all
  2. The first element of the source is a default item.
Just by looking at the result, the two reasons cannot be distinguished.
(I guess that in most real-world programs this is not very important.)

But if you think about the overloads with a predicate function, it is nice to be sure that the source just contains valid objects. :)

If the input is an IEnumerable of T!, only the first reason is possible.
The way to fix it will be to change it to a new SingleOrNull, but still...
  • should non-Nullable be aggressively inferred? I think so
  • should default(string!) throw exceptions? I think so
I would agree to both.

Stefan

PS: I have to think about the other points.
Aug 15, 2014 at 6:16 PM
Edited Aug 15, 2014 at 7:04 PM
Stefan75

T! -> T : implicit cast allowed
T -> T! : explicit cast required.


If T! is to indicate a non-null reference type, is that also a guarantee that the value of that T! can never be null?
If it doesn't have that guarantee (e.g. it doesn't hold true), then as I see it there is no point in having them. Why not just use the normal T instead, then check for null?

Think of it as a form of Nullable:
Nullable.None  == null
Nullable.Some( Value ) == any value that isn't null

If it does have that guarantee, then it should be possible to specify some other value if null is used. (See here for more)
or the embedded code-contracts topic where non-null is supported via code-contract.

How an explicit cast from T to T! could work:
explicit T! CType( T ObjT )
{
  if ( ObjT is null )
  {
    /* Does this reference type support non-null? */
    if ( typeof( T ) is non_null_enabled )
    {
      /* It does support it */
      return T.default();
      /* NOTE: This is different from default(T), which returns null for a nullable reference type.
         Here T.default() returns the non-null instance of a reference type.
         See Topic: Overridable Default Values For Null? */
    }
    else
    {
      /* It doesn't support non-null, so all I can do now is throw an exception */
      throw new NullCastException();
    }
  }
  else
  {
    /* It isn't null, so just return the same instance */
    return ObjT;
  }
}
Aug 15, 2014 at 8:54 PM
Is the objective to design something that is compatible with the existing type system, or something that would require a new type system?

If the objective is to work within the existing type system, your objective should be to try to come up with reasonable compile-time diagnostics most of the time, and have the compiler perform run-time checks for cases where proper behavior cannot be statically assured. For example, because a List<String!> instance would be no different from a List<String> instance, there would be no way to prevent a List<String> which contained some null entries from being cast to Object and then to List<String!>. Even though there's no real difference between a List<String> and a List<String!>, though, it could be helpful for a compiler to specify that given a reference of type List<T!>, members of type T will be assumed to be of type T! in the absence of attributes indicating otherwise. Given List<String!> myList; String! myString; it should be possible for code to say myString = myList[3]; and have the compiler-generated code perform a null check and assignment. Something like FirstOrDefault should have an attribute specifying that the result may return its default value regardless of whether the type is supposedly non-nullable.

Providing any sort of real assurance that code which expects a List<String!> won't get a list that contains null entries would require some major changes to the type system. It may be possible to do it without any major breaking changes, but it certainly wouldn't be easy. Among other things, it would create conflict between two existing rules:
  1. A storage location of type X can be guaranteed never to hold a reference to anything that isn't a subtype of X.
  2. If X is a subtype of Y, then a storage location of type X can be copied to one of type Y without an explicit cast.
Since a storage location of type Foo! can hold a reference to a Foo, that would imply that Foo must derive from Foo!. On the other hand, if Foo derives from Foo! then it should be possible to copy a storage location of type Foo to one of type Foo! without an explicit cast. Fundamentally, for non-nullable types to work, they must be implemented in a fashion outside the type hierarchy, and .NET doesn't really have any way to deal with that.
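The conflict between the two rules can be shown in a few lines (hypothetical syntax, using the Foo example above):

```C#
Foo! nn = new Foo();   // a Foo! location holds a reference to a Foo,
                       // so by rule 1, Foo would have to derive from Foo!

Foo f = null;
Foo! bad = f;          // ...but then rule 2 says this assignment needs no cast,
                       // silently smuggling a null into a Foo! and defeating the guarantee
```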
Aug 15, 2014 at 10:33 PM
Edited Aug 15, 2014 at 10:36 PM
To me, null isn't strictly a type but one possible value that can be represented by the reference subset of types.

So what if T! didn't indicate the type non-null, but actually meant the strict subset of T values that excludes the value null?
E.g. T! ⊆ T
Inheritance still works logically: D <: T (D is a T), so D! is a T!.

T! is a derived subset type of type T
Aug 16, 2014 at 3:39 PM
Therefore the definition of SingleOrDefault related to nullable would be:
public static TSource SingleOrDefault<TSource>(this IEnumerable<TSource!> source) // returns TSource not TSource!
There is such a thing as Option. It's designed for non-nullable variables that, well, may or may not have a value. See https://github.com/tejacques/Option/ for a .NET implementation.
Aug 18, 2014 at 7:45 AM
Stefan75 wrote:
Therefore the definition of SingleOrDefault related to nullable would be:
 public static TSource SingleOrDefault<TSource>(this IEnumerable<TSource!> source)    // returns TSource not TSource! !!!

All these xxxOrDefault Methods have the meaning:
Although the input is an IEnumerable of non-nullable the output is always nullable.

=> Therefore these methods have to call default<T> not default<T!>
That's exactly what I meant with SingleOrNull.

Currently C# has the concept of default(T). This usually works for reference types, returning null, but is annoying for value types, returning 0 or DateTime.MinValue.

Once you introduce non-nullable reference types, default(T) becomes even more annoying, returning an invalid value that is going to throw exceptions all the time.

default(T) has been used as a poor man's replacement for Nullable<T>, because Nullable<T> cannot be nested and doesn't work with reference types.

Now, instead of insisting on the error, default(T) should be avoided, and methods returning default(T) should be replaced by methods returning T?.

So, using your example:
public static TSource? SingleOrNull<TSource>(this IEnumerable<TSource> source)
So:
public static int? SingleOrNull<int>(this IEnumerable<int> source)
public static int?? SingleOrNull<int?>(this IEnumerable<int?> source) // double nullable to differentiate [null] from []

public static string SingleOrNull<string!>(this IEnumerable<string!> source) // string!? is similar to just string
public static string? SingleOrNull<string>(this IEnumerable<string> source) // a nullable with a reference type that can also be null, to differentiate [null] from []
It's good that the names indicate the change exactly, replacing Default by Null.
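A sketch of how SingleOrNull could be implemented, assuming the proposed semantics where T? nests and works for any T (not valid C# today, where an unconstrained TSource? cannot be expressed):

```C#
public static TSource? SingleOrNull<TSource>(this IEnumerable<TSource> source)
{
    using (var e = source.GetEnumerator())
    {
        if (!e.MoveNext())
            return null;   // empty sequence: null, never default(TSource)

        TSource single = e.Current;
        if (e.MoveNext())
            throw new InvalidOperationException("Sequence contains more than one element");

        return single;     // implicitly wrapped into TSource?
    }
}
```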
Aug 18, 2014 at 9:27 PM
kekekeks wrote:
There is such a thing as Option. It's designed for non-nullable variables that, well, may or may not have a value. See https://github.com/tejacques/Option/ for a .NET implementation.
It sounds like Option<T> is probably what Nullable<T> should have been (trying to pretend that the default value of a Nullable<T> is "null", rather than simply being a value where HasValue is false, was IMHO an unhelpful hack which, among other things, precluded the type from being fully generic with respect to T). A problem would still remain, however: in the absence of runtime support, I don't think there's any way by which an interface with a covariant type parameter T could include a method that returns an Option<T>. Beyond the fact that only interfaces can support covariance of type parameters would be the difficulty that if class TT:T, then a value of structure type Option<TT> could be converted to a structure of type Option<T> by copying it as though it already was, but a boxed instance of Option<TT> would not be a boxed instance of Option<T>, since code which knows it holds a boxed Option<T> can overwrite its contents with those of another Option<T> [boxed value type instances in .NET are inherently mutable, even though mutation is not particularly easy].
Aug 19, 2014 at 8:37 PM
supercat wrote:
kekekeks wrote:
There is such a thing as Option. It's designed for non-nullable variables that, well, may or may not have a value. See https://github.com/tejacques/Option/ for a .NET implementation.
It sounds like Option<T> is probably what Nullable<T> should have been (trying to pretend that the default value of a Nullable<T> is "null", rather than simply being a value where HasValue is false, was IMHO an unhelpful hack which, among other things, precluded the type from being fully generic with respect to T).
Well, if null didn't exist I couldn't agree more, but once it exists, the more similar value types and reference types are, the better. But I agree that Nullable<T> should be nestable and should allow reference types.
A problem would still remain, however: in the absence of runtime support, I don't think there's any way by which an interface with a covariant type parameter T could include a method that returns an Option<T>. Beyond the fact that only interfaces can support covariance [...]
Actually, support for covariant/contravariant structs will be necessary once Nullable<T> supports reference types. We would also benefit from it in our Lite<T> (similar to EntityRef<T>); currently we have to expose it as an interface (not meant to be implemented) and hide the actual implementation class...
Aug 21, 2014 at 4:54 AM
Olmo wrote:
Well, if null didn't exist I couldn't agree more, but once it exists, the more similar Value Type and Reference Types the better...
While it is useful to be able to have generics that can work with both value types and reference types, trying to ignore the differences between them is not helpful. From an implementation standpoint, the contents of a variable of type Int32 do not derive from Object. The system performs a widening conversion from a value of type Int32 to Object by producing a heap object whose type is Int32, but heap object types and storage location types exist in separate universes. This allows them to use the same Type object to describe them, but has the unfortunate consequence that there's no way in CIL to declare a storage location which, if non-null, will hold a reference to a boxed Int32 [or any other particular boxed value type].

I'm not sure that being able to compare a generic type to null and have such comparisons return true if the object in question is a default(Nullable<T>) is particularly helpful. Having a storage location type for "reference to boxed X" would be more helpful. Alternatively, one could define a couple of storage locations for value types which would encapsulate a reference to a boxed instance (one "nullable" and one "not"), and say that when a Nullable<T> is written with a value, the reference must either contain a reference to an instance holding that value or else a reference to a special singleton instance which indicates that the non-boxed value is the valid one. Such a design would help avoid the need to repeatedly box the same values (the first time a value is boxed, a copy of that reference could be stored into the value type, so repeated "boxing" operations could reuse it).
But I agree that Nullable<T> should be nestable and allow reference types.
That would basically preclude the special boxing behavior. If a Nullable<T> boxes as a T when HasValue is true, and as null when HasValue is false, then there's no way to distinguish a boxed Nullable<Nullable<int>> which holds a null-valued Nullable<int> from one whose "outer layer" is null.
Aug 21, 2014 at 7:27 AM
supercat wrote:
Olmo wrote:
Well, if null didn't exist I couldn't agree more, but once it exists, the more similar Value Type and Reference Types the better...
While it is useful to be able to have generics that can work with both value types and reference types, trying to ignore the differences between them is not helpful. From an implementation standpoint, the contents of a variable of type Int32 do not derive from Object. The system performs a widening conversion from a value of type Int32 to Object by producing a heap object whose type is Int32, but heap object types and storage location types exist in separate universes. This allows them to use the same Type object to describe them, but has the unfortunate consequence that there's no way in CIL to declare a storage location which, if non-null, will hold a reference to a boxed Int32 [or any other particular boxed value type].
I'm aware of the differences between value types and reference types, but currently you can write a List<T> that works perfectly for List<int> and List<string>, and the CLR compiles different generic instantiations that are perfectly performant. Additionally, now both reference types and value types have null (one natively, the other with Nullable<T>), and it's annoying and limiting that the generic instantiation cannot take these differences into account and let us use T? indistinctly.

Value types are usually small enough to be cheap to copy and don't support inheritance anyway. Why do you think having an int^ (a typed boxed value type) would be useful? To me it looks like a waste of memory and CPU due to the new level of indirection and GC pressure, but I'd like to know your use case.
I'm not sure that being able to compare a generic type to null and have such comparisons return true if the object in question is a default(Nullable<T>) is particularly helpful.
Well, if you take null as no value it makes perfect sense.
Having a storage location type for "reference to boxed X" would be more helpful. Alternatively, one could define a couple of storage locations for value types which would encapsulate a reference to a boxed instance (one "nullable" and one "not"), and say that when a Nullable<T> is written with a value, the reference must either contain a reference to an instance holding that value or else a reference to a special singleton instance which indicates that the non-boxed value is the valid one. Such a design would help avoid the need to repeatedly box the same values (the first time a value is boxed, a copy of that reference could be stored into the value type, so repeated "boxing" operations could reuse it).
I still need to know why you find typed boxed value types useful, but caching them in general could be a problem for types with many possible values (long, DateTime, ...). Not sure if I've understood you, however...
But I agree that Nullable<T> should be nestable and allow reference types.
That would basically preclude the special boxing behavior. If a Nullable<T> boxes as a T when HasValue is true, and as null when HasValue is false, then there's no way to distinguish a boxed Nullable<Nullable<int>> which holds a null-valued Nullable<int> from one whose "outer layer" is null.
I think it's possible if boxing only eats the first Nullable.

Simple nullables:
int? val = 3;
object valObj = val; // boxed int
int? val2 = (int?)valObj; // int? re-generated
int val3 = (int)valObj; // could fail at runtime if valObj were null
And for double nullables:
int?? val = (int?)3; //possibly the cast is necessary
object valObj = val; // boxed int?
int?? val2 = (int??)valObj; // int?? re-generated
int? val3 = (int?)valObj; // could fail at runtime if valObj were null, but not if it was a boxed default(int?)
int val4 = (int)valObj; // always a cast exception
Aug 22, 2014 at 10:20 PM
Olmo wrote:
Value types are usually small enough to be cheap to copy and don't support inheritance anyway. Why you think having a int^ (a typed boxed value type) will be any useful? To me it looks like a wast of memory and CPU due to the new level of indirection and GC pressure, but I'd like to know your use case.
The default value of a "boxed int" type would be a genuine null reference; using such a type instead of Nullable<T> would eliminate the need to have a structure type whose default value pretends to be a null reference. A boxed integer type could also be useful for situations where something would otherwise have to be boxed multiple times. For example:
int foo = computeSomething();
message1.AppendFormat("The value was {0}", foo);
sqlCommand1.AddParameter(foo);
sqlCommand2.AddParameter(foo);
would produce three separate boxed integers at runtime. One could write the code as:
Object foo = (Object)(int)computeSomething(); // I prefer to make the intention to produce a boxed int explicit
message1.AppendFormat("The value was {0}", foo);
sqlCommand1.AddParameter(foo);
sqlCommand2.AddParameter(foo);
and only generate one boxed integer at runtime, but the type of foo is then far from clear. I think it would be cleaner to say:
Box<Int32> foo = computeSomething();
message1.AppendFormat("The value was {0}", foo);
sqlCommand1.AddParameter(foo);
sqlCommand2.AddParameter(foo);
I'm not sure that being able to compare a generic type to null and have such comparisons return true if the object in question is a default(Nullable<T>) is particularly helpful.
Well, if you take null as no value it makes perfect sense.
If default-valued Nullable<T> didn't box as null, what situations would require code to test whether a Nullable<T> had a value without knowing it was a Nullable<T>?
I think it's possible if boxing only eats the first Nullable.
I don't think that really works. Given the definitions:
Nullable<Int32> singleNull = default(Nullable<Int32>);
Nullable<Nullable<Int32>> halfNull = singleNull;
Nullable<Nullable<Int32>> doubleNull = default(Nullable<Nullable<Int32>>);
what should be the values of Object.Equals(null, singleNull), Object.Equals(null, halfNull), Object.Equals(null, doubleNull), Object.Equals(singleNull, halfNull), and Object.Equals(halfNull, doubleNull)?

I would posit the following:
  • Object.Equals(halfNull, doubleNull) should be false (the former holds a value of type Nullable<Int32>, while the latter does not)
  • Object.Equals(singleNull, halfNull) should be true (after all, halfNull was set equal to singleNull)
  • The behavior of Object.Equals(null,singleNull) and Object.Equals(null,doubleNull) should match [neither holds a value].
The only way that Object.Equals can implement an equivalence relation subject to the above constraints is for Object.Equals(null,SingleNull) to yield false. The first two constraints imply that doubleNull and singleNull cannot compare equal to each other. It is thus not possible for both to compare equal to null, which in turn implies neither should.
Aug 23, 2014 at 10:40 AM
Sorry, this will be a short answer:

The first example with Box<int> is more about boxing caching than syntax. Most boxings are done inline, without declaring a variable. I don't see a real benefit for now...

About Nullable<Nullable<int>>:
  • singleNull and doubleNull, when boxed, are just null and will compare equal to each other and to null.
  • halfNull is a different monster, a boxed Nullable<int>, and won't be equal to either of them.
Object.Equals(singleNull, halfNull) should be true (after all, halfNull was set equal to singleNull)
This is definitely a point, but this rule does not always hold:
int a = 2;
long? b = a;
object.Equals(a, b); // returns false
Aug 24, 2014 at 1:44 AM
An Int32 with a value 2 is not the same as an Int64 with value 2, so they shouldn't compare equal. On the other hand, given:
bool WackyTest<T>(T it)
{
  T? it2 = it; // it2.HasValue should unconditionally be true.
  return Object.Equals(it, it2);
}
I would consider it "surprising" to have that method return true for most of the things it could be, but false if it happens to be a Nullable<T> which doesn't hold a value. Personally, I think all this confusion could and should have been avoided by making clear that a MaybeValid<T> is a different animal from a T. Type MaybeValid<T> should include methods which will accept an Object and return a valid instance if it's a boxed T, return an "invalid" instance if given null, and (depending upon the method) either throw an exception or return an "invalid" instance when given something that is neither.
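A sketch of what such a type could look like (the type and method names here are hypothetical, derived from the description above):

```csharp
// Hypothetical MaybeValid<T>: explicitly "a different animal" from T,
// with named conversion methods instead of boxing magic.
public struct MaybeValid<T>
{
    public bool IsValid { get; }
    public T Value { get; }

    private MaybeValid(T value) { IsValid = true; Value = value; }

    public static MaybeValid<T> MakeValid(T value) => new MaybeValid<T>(value);
    public static readonly MaybeValid<T> Invalid = default(MaybeValid<T>);

    // Lenient method, analogous to "obj as T" for class types:
    // null and non-T objects both yield Invalid.
    public static MaybeValid<T> TryFrom(object obj) =>
        obj is T t ? MakeValid(t) : Invalid;

    // Strict method, analogous to "(T)obj": a boxed T yields a valid
    // instance, null yields Invalid, and anything else throws.
    public static MaybeValid<T> From(object obj) =>
        obj == null ? Invalid : MakeValid((T)obj);
}
```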
Aug 24, 2014 at 2:04 AM
I thought of a case using Entity Framework. I would want to make non-nullable navigational properties that are related to non-null foreign keys. But there would be a problem doing this, because EF will set them to null in two cases. First, when I set a navigational property, EF sometimes sets it to null and then sets it to the specified value. I'm not sure why it does this. The second case happens when an entity is deleted. I assume this is done to help the garbage collector. It would still be helpful to get the compiler errors so that my code could not set them to null. But I couldn't have the run-time checks. Besides, EF already throws "Property can't be null" exceptions if I try to set it to null.
And what about the garbage collector? If I set a variable to null in my dispose routine, I can't even declare it ReadOnly, let alone non-nullable.
Aug 24, 2014 at 8:06 AM
Edited Aug 24, 2014 at 8:07 AM
supercat wrote:
An Int32 with a value 2 is not the same as an Int64 with value 2, so they shouldn't compare equal. On the other hand, given:
bool WackyTest<T>(T it)
{
  T? it2 = it; // it2.HasValue should unconditionally be true.
  return Object.Equals(it, it2);
}
I would consider it "surprising" to have that method return true for most of the things it could be, but false if it happens to be a Nullable<T> which doesn't hold a value.
It's true that it's surprising. Without a better idea, I think it's still worth it. It's unfortunate that object.Equals is not generic; that would fix this problem and avoid nasty boxing.
Personally, I think all this confusion could and should have been avoided by making clear that a MaybeValid<T> is a different animal from a T. Type MaybeValid<T> should include methods which will accept an Object and return a valid instance if it's a boxed T, return an "invalid" instance if given null, and (depending upon the method) either throw an exception or return an "invalid" instance when given something that is neither.
Not sure why these methods solve the problem; isn't a Nullable<T> without the boxing magic enough?
Aug 27, 2014 at 2:16 PM
Olmo wrote:
Not sure why these methods solve the problem; isn't a Nullable<T> without the boxing magic enough?
Yes, but for the fact that the name Nullable<T> suggests that the default value should have something to do with null.
Aug 27, 2014 at 10:00 PM
Olmo wrote:
Not sure why these methods solve the problem; isn't a Nullable<T> without the boxing magic enough?
One more thing: I suggested having two methods because in some cases one would want behavior which, if T happened to be a class type, would be equivalent to foo as T, and in other cases equivalent to (T)foo. With class types, the former will silently yield null if foo is not a reference to a T, while the latter will throw an exception if foo is a reference to something other than a T. I suppose from a syntactical perspective one could say that foo as (Nullable<T>) would yield the former behavior and (Nullable<T>)foo the latter, but to my mind casting syntax is overused. If I had my druthers, (T)(U)someV should generally either be essentially equivalent to (T)someV or be invalid (possibly failing at run-time). Using methods rather than cast syntax would avoid that issue.
Dec 19, 2014 at 9:17 PM
Not sure if this is still under consideration, but if it is I'd like to give some input from the end user perspective (not a compiler writer or language designer).

First I think this should IDEALLY be a compile time only check. While the potentially more precisely located exception would be helpful, in practice it should only happen when dealing with old libraries, which is enough of a hint to find where it is coming from. Doing it as a compile time check leaves the more heavy duty option (Code Contracts) for enforcement if you want. Basically, adding a runtime check doesn't seem to add a lot.

Secondly, I don't think it needs to be applied to variables, just input and return values. That is not an objection to having that as a feature.

Finally, as a developer, what I need to know is whether or not validation is needed. If I write a method with a parameter that is !Reference, then I've served notice that I'm not going to check for null, I'm just going to use it, and I expect the compiler to make that a safe thing to do. If I return a value (through a property, an out/ref parameter, or the return value) and the type is !Reference, then I expect the compiler to prevent me from returning a value that I have not validated (either through only calling methods that are themselves validated, or by explicitly having set a value or thrown an exception).
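Under the proposal's syntax, that contract might look like the following (hypothetical syntax, since `!` annotations don't exist in C#; the flow-analysis narrowing shown is an assumption about how the compiler would behave):

```csharp
// Hypothetical "!" syntax: the compiler, not the runtime, enforces the contract.
string! Normalize(string! input)
{
    return input.Trim();       // safe to dereference: input can never be null here
}

void Caller(string maybeNull)
{
    // Normalize(maybeNull);   // compile-time error: string is not convertible to string!

    if (maybeNull != null)
        Normalize(maybeNull);  // OK, assuming flow analysis narrows string to string!
}
```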

Doing this without Attributes that get baked into the meta-data would limit the validation to just the current project, which would be an improvement but definitely suboptimal.

Hopefully this doesn't sound too naïve or ignorant -- I basically love the idea, I just think it should be kept as simple as possible.
Dec 20, 2014 at 11:08 AM
johnmoreno wrote:
Not sure if this is still under consideration, but if it is I'd like to give some input from the end user perspective (not a compiler writer or language designer).
Well, the problem is still there, that's for sure :)
First I think this should IDEALLY be a compile time only check. While the potentially more precisely located exception would be helpful, in practice it should only happen when dealing with old libraries, which is enough of a hint to find where it is coming from. Doing it as a compile time check leaves the more heavy duty option (Code Contracts) for enforcement if you want. Basically, adding a runtime check doesn't seem to add a lot.
Well, old libraries are going to stay around for a long time, especially if changing to non-nullable takes effort. What is the problem with the compiler generating some runtime checks? Consider that for every 'obj.Member' the runtime is already doing a null check anyway, so they have to be really fast.
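To illustrate, here is the kind of check the compiler could emit at the boundary with non-annotated callers (the lowering shown is hypothetical, as is the `string!` syntax):

```csharp
// Hypothetical lowering: a method declared as "void Process(string! name)"
// could be compiled as if the author had written:
void Process(string name)
{
    if (name == null)                       // check inserted by the compiler,
        throw new ArgumentNullException("name"); // only reachable from old callers

    Console.WriteLine(name.Length);         // body proceeds with no null checks
}
```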
Secondly, I don't think it needs to be applied to variables, just input and return values. That is not an objection to having that as a feature.
I agree that the most pressing issue is non-nullability on public APIs, but it is more orthogonal to be able to use it everywhere (variables, generic parameters, etc.).
Finally, as a developer, what I need to know is whether or not validation is needed. If I write a method with a parameter that is !Reference, then I've served notice that I'm not going to check for null, I'm just going to use it, and I expect the compiler to make that a safe thing to do. If I return a value (either through a property, a out/ref parameter as the return value) and the type is !Reference, then I expect the compiler to prevent me from returning a value that I have not validated (either through only calling methods that are themselves validated or explicitly having set a value or thrown an exception).
Of course, but in order to make this work also with old libraries that could be calling your code, you need the runtime checks.
Doing this without Attributes that get baked into the meta-data would limit the validation to just the current project, which would be an improvement but definitely suboptimal.
Agree.
Hopefully this doesn't sound too naïve or ignorant -- I basically love the idea, I just think it should be kept as simple as possible.
It's nice that you bring the topic back to the table. Maybe now that C# 6.0 has settled down, they are open to more radical ideas :).

For the new ones, here is the proposal: https://gist.github.com/olmobrutall/31d2abafe0b21b017d56
Dec 21, 2014 at 5:39 AM