Do we need a discussion on the overall philosophy and design of C#?

Topics: C# Language Design
Apr 7, 2014 at 5:24 PM
The combination of the new, more capable Roslyn compiler and its release as open source makes me wonder whether there shouldn't be a general discussion of the goals for the C# language. Without such a discussion, I can see an explosion of ideas going off in many different directions (not that this is wrong in itself). If we are not careful, such an explosion of enhancements can make the language more complicated and confusing.

For example, if the language goes deep into macro capabilities, how does that affect how you debug such code? Languages also use reserved words and special characters for syntax. One can easily add new features by 'burning' some of the unused special characters, but what is the tradeoff? One can also add new reserved words, but that can break existing applications that already use those words as symbols.

I am wondering if there is already a design/qualities/philosophy type of document floating around. The reason for having such a document is two-fold: it passes on the vision to the community, and it creates a measuring stick for evaluating proposed changes to the language.

For example, some of the things that I would put in that list include some of the following:

Succinctness/conciseness: The language tries to express things as simply as possible, without extra ceremony. Examples are properties, LINQ, attributes, and keeping class definitions and methods together (versus separately, as in C++).

Interoperability with the .NET library

Testability and Debug-ability
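To make the succinctness point concrete, here is a small sketch (with illustrative names only) contrasting an auto-property and a LINQ query with the more ceremonious alternatives they replace:

```csharp
using System;
using System.Linq;

class Person
{
    // One line replaces a backing field plus hand-written get/set methods.
    public int Age { get; set; }
}

class Program
{
    static void Main()
    {
        var ages = new[] { 12, 25, 31, 17 };

        // LINQ states the intent in a single expression...
        var adults = ages.Where(a => a >= 18).ToArray();

        // ...instead of an explicit loop with a temporary list.
        Console.WriteLine(string.Join(",", adults)); // prints "25,31"
    }
}
```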

What do people think?
Apr 7, 2014 at 5:39 PM
It would be great to set up an "ideas framework" where each idea must have a few questions answered, such as:
  • What area does the new idea cover - less typing, more validation, extra bug protection...?
  • Is it possible to achieve the same result with current language features?
  • What alternatives are proposed/available today?
  • How much of the existing code base is affected by the change (breaking or not)?
  • How must tooling be updated?
  • ...
It may help to save time and effort.
Apr 7, 2014 at 6:09 PM
Edited Apr 7, 2014 at 6:09 PM
I have to say that although I've been a C# fan since 1.0, I am not overjoyed by what I read regarding C# 6.

I hope the language will:
  • stay readable, clear, and intuitive;
  • remain a general-purpose language, not full of hacks and shortcuts for special use cases;
  • stay easily toolable - in particular, refactoring should be possible efficiently and with confidence;
  • and not become a "create your own flavour" language. When I see code from another team that uses C#, I should immediately be able to understand it, without looking up what some aliases or macros really mean.
There are several proposals in the air that violate those points; I hope C# 6 will still turn out great.
Apr 7, 2014 at 6:18 PM
I think a document like this would be much needed, but without taking it as the Holy Grail.
  • If this document had been written for C# 2.0, it would have said that C# is fundamentally an OO language; then LINQ happened.
  • If written for C# 3.0, it would have said that C# is fundamentally a static language; then dynamic happened.
  • If written for C# 4.0... well, by that time we were already used to changes.
I think all languages (except perhaps COBOL) aspire to be succinct, but one particular characteristic of C# is its ability to reinvent itself, conquering new territories while keeping the language understandable and backwards compatible.

In that sense, I think there are two big things that have to be addressed:

Non-nullable reference types:

The fact that we cannot achieve a perfect solution has kept the C# team from shipping a good-enough solution for way too long. This should have been solved in C# 2.0 along with generics and nullables, and it would have caught tons of bugs in production.

I have a proposal that is backwards compatible, compatible with other languages, and lets you decide how strict you want the compiler to be. I would like to know more opinions about it:


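Whatever shape a language-level feature might take, the flavour of the idea can be sketched today at the library level. The following is only an illustration of the concept, not the proposal referenced above; NonNull<T> is a hypothetical name:

```csharp
using System;

// Hypothetical sketch only: a library-level approximation of a
// non-nullable reference. A real language feature could enforce
// this statically instead of at runtime.
public struct NonNull<T> where T : class
{
    private readonly T value;

    public NonNull(T value)
    {
        // The null check moves to the construction site, so code that
        // receives a NonNull<T> never has to re-check.
        if (value == null) throw new ArgumentNullException("value");
        this.value = value;
    }

    public T Value
    {
        get
        {
            // Guards against default(NonNull<T>), which bypasses the constructor.
            if (value == null) throw new InvalidOperationException("Uninitialized NonNull<T>.");
            return value;
        }
    }

    public static implicit operator T(NonNull<T> wrapped) { return wrapped.Value; }
}

class Program
{
    static int Length(NonNull<string> s) { return s.Value.Length; }

    static void Main()
    {
        Console.WriteLine(Length(new NonNull<string>("hello"))); // prints 5
        // new NonNull<string>(null) would throw at the call site,
        // instead of a NullReferenceException deep inside Length.
    }
}
```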
Meta-programming:

If I could bet, this is going to be the next big thing in C#. Scala is an enterprise-but-advanced programming language, just like C#, and is already making good use of it.

Meta-programming, like any feature, can be abused - maybe this one more easily than most. But I think it could be controlled with some design decisions:
  • Triggered by attributes: Programmers already associate attributes with reflection magic; this is just another magic trick.
  • From valid C#: With the attributes removed, the program should compile by itself. I would restrict it to transformations that go from valid C# to valid C#. This way we avoid all the complex interactions that come with adding syntax to the language, and it could also be simpler for refactorings, etc.
  • Programmed using Roslyn: By composing Roslyn nodes instead of unsafe strings, it's much easier to make correct transformations and interpret the possible errors.
An advanced implementation of meta-programming could associate parts of the expanded pattern with the original pattern, allowing some kind of debugging, but I think the main use will be to reduce trivial pieces of code: custom auto-properties, declaring dependency properties, ICommand patterns, etc.
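To illustrate the "valid C# to valid C#" idea, here is a hypothetical sketch - the [Notify] attribute and the transformation are invented for illustration. The annotated class compiles on its own, and a Roslyn-based tool could expand it into the equivalent hand-written INotifyPropertyChanged boilerplate, which is also plain C#:

```csharp
using System;
using System.ComponentModel;

// Hypothetical marker attribute; it has no behaviour of its own.
[AttributeUsage(AttributeTargets.Property)]
class NotifyAttribute : Attribute { }

// What the programmer writes: valid C# even if the transformation never runs.
class PersonBefore
{
    [Notify]
    public string Name { get; set; }
}

// What a Roslyn-based transformation might expand it into: still valid C#.
class PersonAfter : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    private string name;
    public string Name
    {
        get { return name; }
        set
        {
            name = value;
            var handler = PropertyChanged;
            if (handler != null)
                handler(this, new PropertyChangedEventArgs("Name"));
        }
    }
}

class Program
{
    static void Main()
    {
        var p = new PersonAfter();
        p.PropertyChanged += (s, e) => Console.WriteLine("Changed: " + e.PropertyName);
        p.Name = "Ada"; // prints "Changed: Name"
    }
}
```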

Some opinions?
Apr 7, 2014 at 11:34 PM
One thing I would like to see addressed is giving programmers ways to specify when certain compiler transformations should be performed and when they should not. For example, given readonly Point pt;, the compiler will helpfully turn pt.X into var temp = pt; temp.X; and rather less helpfully turn pt.Offset(1,2); into var temp = pt; temp.Offset(1,2); - the offset is applied to a temporary copy and silently discarded. Much of the "mutable structs are evil" vibe stems from things like the latter transformation. If the Offset method of Point had an attribute which said "do not invoke on read-only members", that would allow such methods to be written and used where they would achieve the desired result, while allowing the compiler to squawk when they are used in places that can never work.
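The defensive-copy behaviour described above is easy to observe directly (Point and Holder are illustrative types):

```csharp
using System;

struct Point
{
    public int X, Y;
    public void Offset(int dx, int dy) { X += dx; Y += dy; }
}

class Holder
{
    public readonly Point Pt;
    public Holder(int x, int y) { Pt = new Point { X = x, Y = y }; }
}

class Program
{
    static void Main()
    {
        var h = new Holder(10, 20);

        // Reading through the compiler-made copy is harmless...
        Console.WriteLine(h.Pt.X); // prints 10

        // ...but the mutating call also runs on a temporary copy,
        // so the offset is silently lost. No warning is issued.
        h.Pt.Offset(1, 2);
        Console.WriteLine(h.Pt.X); // still prints 10
    }
}
```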

Similar situations arise with float and double. There are times when code uses float and double because it wants precise IEEE semantics. There are other times when code uses 32-bit floats because, even though values would ideally have infinite precision, 32-bit floats are good enough and the higher precision of double wouldn't be worth the storage cost. When code wants precise IEEE semantics, any implicit typecast between floating-point types represents a probable mistake. On the other hand, when code simply wants a low-cost floating-point storage format, requiring explicit casts to float will make mistakes more probable rather than less probable. Even if the Framework only has one 32-bit and one 64-bit floating-point type, a compiler could use attributes to distinguish types with looser or stricter conversion rules than the default.
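The silent widening described above is easy to demonstrate; under strict IEEE expectations, the implicit float-to-double conversion below is exactly the kind of probable mistake that deserves a squawk:

```csharp
using System;

class Program
{
    static void Main()
    {
        float f = 0.1f;

        // Legal implicit widening: no cast required, no warning issued.
        double d = f;

        // But d is not the double 0.1 -- it is the float's rounding
        // error carried to 64 bits of precision.
        Console.WriteLine(d == 0.1);  // prints False
        Console.WriteLine(d > 0.1);   // prints True: d is slightly larger
    }
}
```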