
There are a few optimizations possible. Maybe the Java JIT is performing them and the CLR is not.

A conjunction of divisibility tests

(x % a == 0) && (x % b == 0) && ... && (x % z == 0)

can be replaced by a single test against the least common multiple of the divisors:

(x % lcm(a, b, ..., z) == 0)

So in your example the comparison chain could be replaced by

if (i % 232792560 == 0) break;

(but of course if you've already calculated the LCM, there's little point in running the program in the first place!)
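
To see where that constant comes from, here is a quick C# sketch (the helper names are my own) that folds lcm over 1..20 using the identity lcm(a, b) = a / gcd(a, b) * b:

using System;

class LcmDemo
{
    // lcm(a, b) = a / gcd(a, b) * b; dividing before multiplying avoids overflow
    static long Gcd(long a, long b) { return b == 0 ? a : Gcd(b, a % b); }
    static long Lcm(long a, long b) { return a / Gcd(a, b) * b; }

    static void Main()
    {
        long lcm = 1;
        for (int n = 2; n <= 20; n++)
            lcm = Lcm(lcm, n);
        Console.WriteLine(lcm); // prints 232792560
    }
}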

Less drastically, since 232792560 = 14549535 * 16, the single test can also be split in two:

if (i % (14549535 * 16) == 0) break;
if ((i % 16 == 0) && (i % 14549535 == 0)) break;

The first division can be replaced with a mask and compare against zero:

if (((i & 15) == 0) && (i % 14549535 == 0)) break;

The second division can be replaced by a multiplication by the modular inverse:

final long LCM = 14549535;
final long INV_LCM = 8384559098224769503L; // == 14549535**-1 mod 2**64
final long MAX_QUOTIENT = Long.MAX_VALUE / LCM;
// ...
if (((i & 15) == 0) &&
    (0 <= (i>>4) * INV_LCM) &&
    ((i>>4) * INV_LCM < MAX_QUOTIENT)) {
    break;
}

It is somewhat unlikely that the JIT is employing this, but it is not as far-fetched as you might think - some C compilers implement pointer subtraction this way.
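
For the curious, here is a C# sketch (helper names are mine) of how such an inverse can be derived with Newton's iteration and used as a divisibility test: for odd d, x % d == 0 exactly when x * inv(d), computed modulo 2**64, is at most (2**64 - 1) / d.

using System;

class InverseDemo
{
    // For odd d, returns d**-1 mod 2**64. Each Newton step doubles the
    // number of correct low bits, so six steps are more than enough.
    static ulong InverseMod64(ulong d)
    {
        ulong x = d; // correct to 3 bits already: d*d == 1 (mod 8) for odd d
        for (int i = 0; i < 6; i++)
            x *= 2 - d * x; // wrapping (unchecked) arithmetic is intended here
        return x;
    }

    static void Main()
    {
        const ulong Lcm = 14549535;
        ulong inv = InverseMod64(Lcm);            // 8384559098224769503
        ulong maxQuotient = ulong.MaxValue / Lcm; // largest valid quotient

        for (ulong i = 232792559; i <= 232792561; i++)
            Console.WriteLine("{0}: {1} vs {2}", i, i * inv <= maxQuotient, i % Lcm == 0);
    }
}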

@Kevin, in a misguided attempt to roll my own encryption scheme a few years ago :-)

Why is this Java code 6x faster than the identical C# code? - Stack Ov...

java c# execution-time

clr.dll is the primary binary in the .NET runtime version 4.0 and forward. This dll used to be mscorwks.dll in previous versions of .NET.

Thanks man, I was thinking of asking a question about this, and found this post.

So .NET 2.0 and 3.5 use mscorwks.dll, and later versions use clr.dll?

Correct. The DLL name changed with version 4.0.

c# - What is clr.dll on .Net framework and what does it do? - Stack Ov...

c# .net garbage-collection clr profiler

Common Language Runtime. It's basically the engine for .NET.

c# - What is clr.dll on .Net framework and what does it do? - Stack Ov...

c# .net garbage-collection clr profiler

Unfortunately, you can't change .NET's internal representation of strings. My guess is that the CLR is optimized for two-byte (UTF-16) strings.

What you are dealing with is the famous space-time tradeoff, which says that to save memory you'll have to spend more processor time, or you can save processor time by using more memory.

That said, take a look at some considerations here. If I were you, once you've established that the memory gain will be enough for you, do try to write your own "string" class, which uses ASCII encoding. This will probably suffice.
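
As a starting point, here is a bare-bones sketch of what such a class could look like (AsciiString is my name for it, and it assumes the input really is ASCII):

using System;
using System.Text;

public sealed class AsciiString
{
    private readonly byte[] _bytes; // one byte per character instead of UTF-16's two

    public AsciiString(string s)
    {
        // Caution: Encoding.ASCII silently turns non-ASCII characters into '?',
        // so validate the input first if that matters in your application.
        _bytes = Encoding.ASCII.GetBytes(s);
    }

    public int Length { get { return _bytes.Length; } }

    public char this[int index] { get { return (char)_bytes[index]; } }

    public override string ToString() { return Encoding.ASCII.GetString(_bytes); }
}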

More to the point, you should check the post "Of memory and strings" by Stack Overflow legend Jon Skeet, which deals with the problem you are facing. Sorry I didn't mention it right away; it took me some time to find the exact post from Jon.

c# - How to reduce memory footprint on .NET string intensive applicati...

c# .net string utf-8 utf-16

If it really is a .NET assembly (as some of your later comments, based on info from Spy++, suggest), then it's possibly been obfuscated. Some obfuscators can modify the assembly metadata streams to make them appear to be invalid to both ildasm and Reflector, whereas the CLR can still load them because it does not do as much validation.

reflector - "is not a .NET module." what does it mean? - Stack Overflo...

reflector

I know Java has VM versions on pretty much any OS, but is there a CLR for OSes besides Windows?

compiler construction - Why are Virtual Machines necessary? - Stack Ov...

compiler-construction vm-implementation

I don't think there are currently any standalone .NET VMs that are self-hosting, but both Cosmos and SharpOS are .NET runtimes written in C#.

It may be possible to reuse some of their runtime code to extract a standalone runtime. Cosmos can be used to host a custom application on boot: http://www.codeproject.com/KB/system/CosmosIntro.aspx

.net - Is there a CLR that runs on the CLR? - Stack Overflow

.net clr cil

Main should not be public, since it is not supposed to be called by any other methods in your assemblies. Only the CLR should be able to call it when the application starts. Access specifiers do not apply to the CLR in the same way (if at all) as they do to your assemblies, which are managed by the CLR.

You can make it public if you want; but that would mean any other application could reference it without reflection.

That's what the runtime does: it just looks for static methods named "Main" with various signatures and then invokes the first one it finds. Likely something like this:

using System;
using System.Reflection;

// "HelloWorld.exe" is just a placeholder for the assembly being launched
Assembly assembly = Assembly.LoadFrom("HelloWorld.exe");

foreach (Type type in assembly.GetTypes())
{
    foreach (MethodInfo methodInfo in type.GetMethods(BindingFlags.Static | BindingFlags.NonPublic | BindingFlags.Public))
    {
        if (methodInfo.Name == "Main" /*TODO: && methodInfo.Parameters is valid*/)
        {
            methodInfo.Invoke(null, new object[0]);
            return; // invoke only the first match, as described above
        }
    }
}

@YasashiiEirian Instead of the hello world .exe, can't you compile it to a DLL and use that?

Is there any simple C# code example to invoke Main() with public modif...

c#

You should check out the IKVM.NET Project. It includes a Java Virtual Machine written in .NET.

I know it's not an actual CLR that runs on top of the CLR, but it's the closest thing I know of that does what you want.

I'm aware of IKVM, but yes, it may be helpful to have a look at.

.net - Is there a CLR that runs on the CLR? - Stack Overflow

.net clr cil

There is no structural typing in the CLR, but you seem to have structural typing mixed up with type inference or with the Ruby feature known as "symbols". Structural typing would be if the CLR considered Func<int,bool> and Predicate<int> to be the same type, or at least implicitly convertible.
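
A tiny sketch of that nominal behaviour (my own example):

using System;

class NominalTypingDemo
{
    static void Main()
    {
        Func<int, bool> isEven = delegate(int n) { return n % 2 == 0; };

        // Predicate<int> p = isEven;  // compile error: identical signatures,
        //                             // but the CLR's type system is nominal
        Predicate<int> p = new Predicate<int>(isEven); // explicit re-wrap is required
        Console.WriteLine(p(4)); // True
    }
}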

Speaking of comparison, don't forget Comparer<T>!

@Qwertie I was referring to programming language features like polymorphic variants in OCaml. OCaml's LablGL library has many interesting examples of structural typing being useful in the context of graphics. Nothing to do with type inference and only tangentially related to symbols.

How would one use an immutable-struct IEnumerator?

C# (.NET) Design Flaws - Stack Overflow

c# .net design

If the object is not reachable then you can call GC.Collect() and the object will be destroyed. The concept of IDisposable has nothing to do with the CLR and is mostly for user code to implement to perform additional disposal logic. Calling Dispose() on an object will not free the object itself from memory, though it may very well dispose any resources that this object references.

I should add that while what I said is a way to achieve this, in 99.9999% of applications you should never call GC.Collect() because it'll often degrade the performance of your application instead of improving it.

Calling GC.Collect() isn't something you should do unless it's for a very good reason.

It's not about references to the object - it's about reachability (which I do mention).

If object a has a reference to object b, and b has a reference to a, then both of a and b are referenced (AKA a circular reference). However, if no one else has a reference to either a or b, then neither is reachable, and the garbage collector will destroy them (eventually).
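
A minimal sketch of that situation (the finalizer is only there to make the collection visible, and the explicit GC.Collect() is for demonstration only):

using System;

class Node
{
    public Node Other;
    ~Node() { Console.WriteLine("collected"); }
}

class CycleDemo
{
    static void Main()
    {
        Node a = new Node();
        Node b = new Node();
        a.Other = b; // a -> b
        b.Other = a; // b -> a: a circular reference

        a = null;    // no roots remain, so the whole cycle is
        b = null;    // unreachable despite the mutual references

        GC.Collect();                  // demonstration only!
        GC.WaitForPendingFinalizers(); // prints "collected" twice
    }
}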

Time plays no part in GC performance. It's all about how often and how much memory is allocated. The fact that you space the calls minutes apart means nothing - if a normally induced GC would not have occurred then you may be unnecessarily promoting objects into higher generations which affects performance. Particularly in long running applications and services.

@Cecil: YES, it can hurt performance. Imagine forcing your main thread to stop for a collection even when there's nothing to collect, or forcing a collection when your app is busy. Even worse, because the garbage collector is generational, you can force objects to move into a longer-living generation when they really should not do so.

destructor - Manually destroy C# objects - Stack Overflow

c# destructor idisposable using

As soon as you enter DoSomething, the CLR sets up space for int x. Why does it not wait until it reaches the line with int x = 5 on it?

The question is not answerable because the entire question is founded on an incorrect premise. The storage space for the local variable may be:

  • allocated when the method is first entered
  • allocated when control reaches the initialization (assuming initialization and declaration are different)
  • allocated under special circumstances -- if for example the local is a closed-over local of a lambda, or in an iterator block, or in an async block, how and when the local storage is allocated can get complicated
  • elided entirely; if the local is never used then it might not be allocated in the first place.

The C# compiler and the jit compiler definitely ensure that the local storage is allocated in a way that is correct, and attempt to ensure that it is efficient. How they choose to do so depends on the exact situation. It might be more efficient to allocate the space up front, and it might be more efficient to allocate it for only as long as the variable is in use; the jitter is permitted broad latitude in choosing the lifetime of a local variable. Local variables are permitted to live both longer and shorter than their scopes would imply if the jitter can do so without violating program correctness.
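
For instance, here is a sketch of the closed-over case from the list above: x reads like a local, but because the lambda captures it, the compiler hoists it into a field of a generated closure class, so its storage outlives the method's frame.

using System;

class ClosureDemo
{
    static Func<int> MakeCounter()
    {
        int x = 0;        // declared as a local...
        return () => ++x; // ...but captured, so it is stored on the heap
    }

    static void Main()
    {
        Func<int> counter = MakeCounter();
        Console.WriteLine(counter()); // 1
        Console.WriteLine(counter()); // 2: x survived MakeCounter's return
    }
}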

Since the premise of the question is incorrect, there is no answer to the question. Ask a better question.

@IgbyLargeman: The hint you get when you hover over the upvote is "this answer is useful", not "this answer answers the question that was asked". If pointing out that a question is predicated on an incorrect premise is useful then there is no contradiction there.

I still think you've contributed to the instability of the fabric of spacetime, if only a little. Don't blame me when a Chesterfield sofa materializes in your office, that's all I'm saying.

@IgbyLargeman: I already have a Chesterfield in my office, actually. (And I note that you have outed yourself as probably being (like me) an English speaking Canadian; very few people in the world call sofas "Chesterfields" outside of English Canada.)

I had a feeling you might say that. I bet your office is sweet. I'm an English speaking Australian (but living one state south of you), but the Chesterfield reference comes to me from Douglas Adams. In Australia we just call it a couch.

.net - Why does C# bind the local variables up-front? - Stack Overflow

c# .net clr

No - there's no such concept either in C# or in the CLR.

Will this concept make it to C# and/or CLR in the future?

c# - generic NOT constraint where T : !IEnumerable - Stack Overflow

c# .net generics ienumerable

One reason is that there is no CLR support for a readonly local. The readonly keyword is translated into the CLR/CLI initonly flag, which can only be applied to fields and has no meaning for a local. In fact, applying it to a local would likely produce unverifiable code.

This doesn't mean that C# couldn't do this. But it would give two different meanings to the same language construct. The version for locals would have no CLR equivalent mapping.

It actually has nothing to do with CLI support for the feature, because local variables are in no way exposed to other assemblies. The readonly keyword for fields needs to be supported by the CLI because its effect is visible to other assemblies. All it would mean is the variable only has one assignment in the method at compile time.

I think you've just shifted the question to why the CLR does not support this rather than providing the rational behind it. It does allow for const locals, so it would be reasonable to expect readonly locals as well.

An example of this is the variable defined in a using statement. It is local ... and readonly (try to assign to it; C# will raise an error).
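
A quick sketch of that point (the file name is a placeholder):

using System.IO;

class UsingDemo
{
    static void Main()
    {
        using (FileStream fs = File.OpenRead("data.txt"))
        {
            // fs = null; // error CS1656: cannot assign to 'fs'
            //            // because it is a 'using variable'
            fs.ReadByte();
        }
    }
}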

-1 In C++ there's no machine code support for const (which in C++ is more like C# readonly than like C# const, although it can play both roles). Yet C++ supports const for local automatic variables. Hence the lack of CLR support for a C# readonly on local variables is irrelevant.

1. This can easily be a compiler feature, like in C++. CLR support is completely irrelevant. Machine assembly doesn't support it either, so what? 2. (it would) likely produce unverifiable code - I don't see how, but perhaps I am mistaken. 3. it would give two different meanings to the same language construct - I doubt anyone would see this as an issue, since using and out are doing exactly that and the world didn't collapse.

immutability - Why does C# disallow readonly local variables? - Stack ...

c# immutability language-design readonly

The biggest issue I see is that the CLR runs on a VM, and the VM is useful as a layer of abstraction. Some .NET apps can be run on Linux (see the Mono project; I think they are up to .NET 2 compatibility now), so that would all be gone. In C/C++ or other languages that talk directly to the hardware, you have to recompile your code into different binaries for every OS and hardware architecture. The point of having the VM there is to abstract that away, so that you can write the code, build it, and use the exact same binary anywhere.

If you look at it from a Java perspective, they have done a much better job of using their VM as a "write once, run anywhere" model. The same Java classes will run on Windows, Mac, and Linux without a rebuild (by the programmer, anyway; technically the VM is doing that work).

I think the #1 point here is that .NET/CLR is NOT Windows specific, and IMO Microsoft would only help the .NET suite of languages if it put a little more effort toward cross-OS compatibility.

Tell me again why we need both .NET and Windows? Why can't Windows mor...

clr

The only explanation could be that the CLR does additional optimisation (correct me if I am wrong here).

Yes, it is called inlining, and it is done by the JIT compiler at the machine-code level. As the getter/setter are trivial (i.e. very simple code), the method calls are eliminated and the getter/setter body is written directly into the surrounding code.

This does not happen in debug mode in order to support debugging (i.e. the ability to set a breakpoint in a getter or setter).

In Visual Studio there is no way to do that in the debugger. Compile in Release, run without an attached debugger, and you will get the full optimization.
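
For example, a rough sketch of the kind of measurement under discussion; in a Release build run without a debugger, the JIT can inline the trivial accessors, so the loop should cost about the same as direct field access:

using System;
using System.Diagnostics;

class Point
{
    private int _x;
    public int X { get { return _x; } set { _x = value; } } // trivial: a prime inlining candidate
}

class InliningDemo
{
    static void Main()
    {
        Point p = new Point();
        Stopwatch sw = Stopwatch.StartNew();
        for (int i = 0; i < 100000000; i++)
            p.X = p.X + 1; // after inlining, effectively p._x = p._x + 1
        sw.Stop();
        Console.WriteLine("{0} in {1} ms", p.X, sw.ElapsedMilliseconds);
    }
}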

I do not believe that in a real application, where those properties are used in much more sophisticated ways, they will be optimised in the same way.

The world is full of illusions that are wrong. They will be optimized as they are still trivial (i.e. simple code, so they are inlined).

Thanks... but please drop the comments on debug. I am not so stupid as to compare debug builds for performance. Cheers.

They are very valid, because when you get into more complicated stuff you will find a lot of small things behaving differently. The GC, for example, does NOT clean up a lot of stuff quickly either; it keeps references around a lot longer.

c# - Field vs Property. Optimisation of performance - Stack Overflow

c# .net performance c#-4.0 optimization

I think the main motivation for having AppDomains is that the CLR designers wanted a way of isolating managed code without incurring the performance overhead of multiple Windows processes. Had the CLR been originally implemented on top of UNIX (where creating multiple processes is significantly less expensive), AppDomains may never have been invented.

Also, while managed plug-in architectures in 3rd party apps is definitely a good use of AppDomains, the bigger reason they exist is for well-known hosts like SQL Server 2005 and ASP.NET. For example, an ASP.NET hosting provider can offer a shared hosting solution that supports multiple sites from multiple customers all on the same box running under a single Windows process.
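
For a flavour of the plug-in scenario, a minimal sketch using the classic AppDomain API (the commented-out assembly and type names are placeholders):

using System;

class HostDemo
{
    static void Main()
    {
        AppDomain pluginDomain = AppDomain.CreateDomain("PluginDomain");
        try
        {
            // A real host would load plug-in code into the new domain, e.g.:
            // object plugin = pluginDomain.CreateInstanceAndUnwrap(
            //     "PluginAssembly", "Plugins.PluginType");
            Console.WriteLine(pluginDomain.FriendlyName);
        }
        finally
        {
            AppDomain.Unload(pluginDomain); // tear down without killing the process
        }
    }
}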

.net - Good example of use of AppDomain - Stack Overflow

.net appdomain

Although I doubt that this was available in its current format at the time of the question, here is the CLR guide on MSDN, which shows all of the Framework versions and the CLR version that they include. I found it very helpful, so I thought I would share.

Next to .NET 4.5 it states that it includes CLR 4.

Will .NET 4.5 introduce a new version of the CLR? - Stack Overflow

.net clr .net-4.5

Yes, you can precompile using Ngen.exe, however this does not remove the CLR dependence.

You must still ship the IL assemblies as well; the only benefit of Ngen is that your application can start without invoking the JIT, so you get a really fast startup time.

Also, assemblies precompiled using Ngen are usually slower than JIT'ed assemblies, because the JIT compiler can optimize for the target machine (32-bit? 64-bit? Special registers? etc.), while Ngen will just produce a baseline compilation.

There is some debate on the above info from CLR Via C#, as some say that you are required to run Ngen on the target machine only as part of the install process.

Why are ngen assemblies slower? ngen is invoked on the target machine, so that should not actually make a difference. Oh, and ngen is not the actual answer to the question.

Ngen is invoked by the developer to precompile the app, not on the target machine. Please read the section on Ngen in CLR Via C#.

Jon, that depends. ngen can be invoked by the installation process on the target machine, and as far as I know this is in fact done for various Microsoft products implemented in .NET.

However, Ngen can be made part of the application install process, allowing it to compile for the target machine; I'll edit my post.
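
For reference, the install-time step being described boils down to running Ngen on the user's machine (the assembly name is a placeholder):

ngen install MyApp.exe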

EDIT to my comment: you're in fact wrong (and so was I), ngen has to be invoked on the target machine. ngen installs a native image in the machine-local GAC! Precompiling on the developer's machine would be useless.

visual studio - Is it possible to compile .NET IL code to machine code...

.net visual-studio compiler-construction

I am not aware of one, but ideas from JVM-running-on-JVM projects should be helpful.

.net - Is there a CLR that runs on the CLR? - Stack Overflow

.net clr cil