
If you implement the stack using a linked list with a tail pointer, then the worst-case runtime to push, pop, or peek is O(1). However, each element carries some extra overhead (namely, the pointer), which means the structure always has O(n) space overhead. Additionally, depending on the speed of your memory allocator, the cost of allocating new nodes for the stack might be noticeable. Also, if you were to continuously pop all the elements off the stack, you might take a performance hit from poor locality, since there is no guarantee that the linked-list cells are stored contiguously in memory.

If you implement the stack with a dynamic array, then the amortized runtime to push or pop is O(1), and the worst-case cost of a peek is O(1). This means that if you care about the cost of any single operation on the stack, this may not be the best approach. That said, allocations are infrequent, so the total cost of adding or removing n elements is likely to be lower than the corresponding cost in the linked-list-based approach.

Additionally, the memory overhead of this approach is usually better than that of the linked list. If your dynamic array just stores pointers to the elements, then the worst-case memory overhead occurs when the array is half full, in which case there are n extra pointers (the same as when you were using the linked list); in the best case, when the dynamic array is full, there are no empty cells and the extra overhead is O(1). If, on the other hand, your dynamic array directly contains the elements, the memory overhead can be worse in the worst case. Finally, because the elements are stored contiguously, there is better locality if you want to continuously push or pop elements, since all the elements are right next to each other in memory.

  • The linked-list approach has worst-case O(1) guarantees on each operation; the dynamic array has amortized O(1) guarantees.
  • The locality of the linked list is not as good as the locality of the dynamic array.
  • The total overhead of the dynamic array is likely to be greater than that of the linked list if the elements are stored directly.

Neither of these structures is clearly "better" than the other. It really depends on your use case. The best way to figure out which is faster would be to time both and see which performs better.
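To make the trade-off concrete, here is a language-agnostic sketch of the two implementations in Python. This is illustrative only; the class and method names are my own, and Python's built-in list plays the role of the dynamic array:

```python
class LinkedStack:
    """Stack backed by a singly linked list: worst-case O(1) push/pop/peek,
    but one heap-allocated node (with a pointer field) per element."""

    class _Node:
        __slots__ = ("value", "next")  # the per-node pointer overhead lives here

        def __init__(self, value, next):
            self.value = value
            self.next = next

    def __init__(self):
        self._head = None

    def push(self, value):
        # Allocates a new node on every push.
        self._head = self._Node(value, self._head)

    def pop(self):
        node = self._head
        self._head = node.next
        return node.value

    def peek(self):
        return self._head.value


class ArrayStack:
    """Stack backed by a dynamic array: amortized O(1) push,
    contiguous storage, hence better locality."""

    def __init__(self):
        self._items = []  # Python lists are growable arrays

    def push(self, value):
        # Occasionally triggers a reallocation, but amortized O(1).
        self._items.append(value)

    def pop(self):
        return self._items.pop()

    def peek(self):
        return self._items[-1]
```

Both expose the same interface, so timing one against the other for your workload, as suggested above, is straightforward.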

Bravo for mentioning locality; I think it is the winning argument in favor of dynamic arrays.

performance - Linked list vs. dynamic array for implementing a stack -...

performance data-structures stack linked-list dynamic-arrays

I was under the impression that an IEnumerable is essentially very similar to a linked list.

I'm not really sure where you got that impression. An object implementing IEnumerable or IEnumerable<T> means "this object exposes an enumerator, which makes it iterable"; it has nothing directly to do with an implementation of a linked list or an array. They do both share the common feature of being iterable. It is a binding contract to the caller.

So then why is one (a linked list) being implemented using the other (an array) when we actually argue that they are quite different?

A linked list can have an array as its storage as an implementation detail, though that would definitely be a poor design choice.

You may note that List<T> also uses an array as its internal storage, which it resizes once it hits the maximum size of the internal array.

Interesting, thanks. I just wrote an implementation that uses something similar to a LinkedListNode instead of an array, but it is lacking some things like the constructor with arguments.

c# - Why is an IEnumerable(or IList) implemented using arrays instead ...

c# arrays data-structures ienumerable

An example (from the linked page):

This issue could be caused by a session lock. When a long-running PHP script uses sessions with session_start(), the process locks the session file on the server until it finishes, blocking all other PHP processes that try to open the same session file.

This is why you see this behaviour in the same browser, but not on another machine or different browser (since the session is different).

session_write_close();

Call it whenever you do not need to write to the session. You can still read from the session variables after you have called this function, but for another write to a session variable you need to open the session again using session_start().

You can read a lot more about this problem here.

// start the session
session_start();

// I can read/write to session
$_SESSION['latestRequestTime'] = time();

// close the session for writing
session_write_close();

// now do my long-running code.
// still able to read from session, but not write
$twitterId = $_SESSION['twitterId'];

//when you want to write again do session_start() before and close after.


javascript - No php script can be run on server while there is another...

javascript php ajax apache http

2) Decode the polyline to a series of points, using the function from this linked question.

library(ggmap)
# output = 'all' so we get the polyline of the path along the road
my_route <- route(from = "29.98671,31.21431", 
                  to = "29.97864,31.17557",
                  structure = "route", 
                  output = "all")
my_polyline <- my_route$routes[[1]]$legs[[1]]$steps[[1]]$polyline$points
# DecodeLineR <- function(encoded) {... see linked question ...}
route_points <- DecodeLineR(my_polyline)
new_point <- data.frame(lat=29.987201, lng=31.188547)

ggplot(route_points, aes(x=lng, y=lat)) + 
  geom_point(shape=21, alpha=0.5) +
  geom_point(data = new_point, color = 'red') +
  coord_quickmap() +
  theme_linedraw()
# get each distance in miles (great circle distance in miles)
library(fields)
route_points$distance_to_new <- t(rdist.earth(new_point, route_points))
# it's this one:
route_points[which.min(route_points$distance_to_new), ]

Answer: The 76th point on the polyline is closest, at ~0.019 miles away

lat      lng distance_to_new
76 29.98688 31.18853      0.01903183

r - Getting the point on a road that nearest to a giving point in goog...

r

Installing numpy with BLAS support using pip

Find out what BLAS library numpy is currently linked against using ldd.

$ ldd /<path_to_site-packages>/numpy/core/_dotblas.so
...
libblas.so.3 => /usr/lib/libblas.so.3 (0x00007fed81de8000)
...

If _dotblas.so doesn't exist, this probably means that numpy failed to detect any BLAS libraries when it was originally installed, in which case it simply doesn't build any of the BLAS-dependent components. This often happens if you install numpy using pip without manually specifying a BLAS library (see below). I'm afraid you'll have no option but to rebuild numpy if you want to link against an external BLAS library.

_dotblas.so has been removed from recent versions of numpy, but you should be able to check the dependencies of multiarray.so instead:

$ ldd /<path_to_site-packages>/numpy/core/multiarray.so

Install ATLAS/MKL/OpenBLAS if you haven't already. By the way, I would definitely recommend OpenBLAS over ATLAS - take a look at this answer (although the benchmarking data is now probably a bit out of date).

Use update-alternatives to create a symlink to the new BLAS library of your choice. For example, if you installed libopenblas.so into /opt/OpenBLAS/lib, you would do:

$ sudo update-alternatives --install /usr/lib/libblas.so.3 \
                                     libblas.so.3 \
                                     /opt/OpenBLAS/lib/libopenblas.so \
                                     50

You can have multiple symlinks configured for a single target library, allowing you to manually switch between multiple installed BLAS libraries.

$ sudo update-alternatives --config libblas.so.3
Selection    Path                                    Priority   Status
------------------------------------------------------------
  0            /opt/OpenBLAS/lib/libopenblas.so         40        auto mode
  1            /opt/OpenBLAS/lib/libopenblas.so         40        manual mode
  2            /usr/lib/atlas-base/atlas/libblas.so.3   35        manual mode
* 3            /usr/lib/libblas/libblas.so.3            10        manual mode

As @tndoan mentioned in the comments, it's possible to make pip respect a particular configuration for numpy by placing a config file in ~/.numpy-site.cfg - see this answer for more details.

My personal preference is to configure and build numpy by hand. It's not particularly difficult, and it gives you better control over numpy's configuration.


For people who still want to use pip with the config file to reinstall, here is the solution for them. @ali_m: would you please edit your post to put this link in your answer. Thanks

About the advice to use OpenBLAS, I found this at scipy.org/scipylib/building/linux.html: "As of Jan. 2014 ATLAS is the recommended library to use, as OpenBLAS will deadlock when used in combination with the multiprocessing module (or any use of os.fork), and older versions (<= 0.2.8) tend to crash when used on larger problem sizes."

python - Link ATLAS/MKL to an installed Numpy - Stack Overflow

python performance numpy linear-algebra blas

The approach that you describe is compatible not only with C++ but also with its (mostly) subset language C. Learning to develop a C-style linked list is a good way to introduce yourself to low-level programming techniques (such as manual memory management), but it is generally not best practice for modern C++ development.

Below, I have implemented four variations on how to manage a list of items in C++.

  • raw_pointer_demo uses the same approach as yours -- manual memory management required with the use of raw pointers. The use of C++ here is only for syntactic-sugar, and the approach used is otherwise compatible with the C language.
  • In shared_pointer_demo the list management is still done manually, but the memory management is automatic (doesn't use raw pointers). This is very similar to what you have probably experienced with Java.
  • std_list_demo uses the standard-library list container. This shows how much easier things get if you rely on existing libraries rather than rolling your own.
  • std_vector_demo uses the standard-library vector container. This manages the list storage in a single contiguous memory allocation. In other words, there aren't pointers to individual elements. For certain rather extreme cases, this may become significantly inefficient. For typical cases, however, this is the recommended best practice for list management in C++.

Of note: Of all of these, only the raw_pointer_demo actually requires that the list be explicitly destroyed in order to avoid "leaking" memory. The other three methods would automatically destroy the list and its contents when the container goes out of scope (at the conclusion of the function). The point being: C++ is capable of being very "Java-like" in this regard -- but only if you choose to develop your program using the high-level tools at your disposal.

/*BINFMTCXX: -Wall -Werror -std=c++11
*/

#include <iostream>
#include <algorithm>
#include <string>
#include <list>
#include <vector>
#include <memory>
using std::cerr;
/** Brief   Create a list, show it, then destroy it */
void raw_pointer_demo()
{
    cerr << "\n" << "raw_pointer_demo()..." << "\n";

    struct Node
    {
        Node(int data, Node *next) : data(data), next(next) {}
        int data;
        Node *next;
    };

    Node * items = 0;
    items = new Node(1,items);
    items = new Node(7,items);
    items = new Node(3,items);
    items = new Node(9,items);

    for (Node *i = items; i != 0; i = i->next)
        cerr << (i==items?"":", ") << i->data;
    cerr << "\n";

    // Erase the entire list
    while (items) {
        Node *temp = items;
        items = items->next;
        delete temp;
    }
}
raw_pointer_demo()...
9, 3, 7, 1
/** Brief   Create a list, show it, then destroy it */
void shared_pointer_demo()
{
    cerr << "\n" << "shared_pointer_demo()..." << "\n";

    struct Node; // Forward declaration of 'Node' required for typedef
    typedef std::shared_ptr<Node> Node_reference;

    struct Node
    {
        Node(int data, std::shared_ptr<Node> next ) : data(data), next(next) {}
        int data;
        Node_reference next;
    };

    Node_reference items = 0;
    items.reset( new Node(1,items) );
    items.reset( new Node(7,items) );
    items.reset( new Node(3,items) );
    items.reset( new Node(9,items) );

    for (Node_reference i = items; i != 0; i = i->next)
        cerr << (i==items?"":", ") << i->data;
    cerr<<"\n";

    // Erase the entire list
    while (items)
        items = items->next;
}
shared_pointer_demo()...
9, 3, 7, 1
/** Brief   Show the contents of a standard container */
template< typename C >
void show(std::string const & msg, C const & container)
{
    cerr << msg;
    bool first = true;
    for ( int i : container )
        cerr << (first?" ":", ") << i, first = false;
    cerr<<"\n";
}
/** Brief  Create a list, manipulate it, then destroy it */
void std_list_demo()
{
    cerr << "\n" << "std_list_demo()..." << "\n";

    // Initial list of integers
    std::list<int> items = { 9, 3, 7, 1 };
    show( "A: ", items );

    // Insert '8' before '3'
    items.insert(std::find( items.begin(), items.end(), 3), 8);
    show("B: ", items);

    // Sort the list
    items.sort();
    show( "C: ", items);

    // Erase '7'
    items.erase(std::find(items.begin(), items.end(), 7));
    show("D: ", items);

    // Erase the entire list
    items.clear();
    show("E: ", items);
}
std_list_demo()...
A:  9, 3, 7, 1
B:  9, 8, 3, 7, 1
C:  1, 3, 7, 8, 9
D:  1, 3, 8, 9
E:
/** brief  Create a list, manipulate it, then destroy it */
void std_vector_demo()
{
    cerr << "\n" << "std_vector_demo()..." << "\n";

    // Initial list of integers
    std::vector<int> items = { 9, 3, 7, 1 };
    show( "A: ", items );

    // Insert '8' before '3'
    items.insert(std::find(items.begin(), items.end(), 3), 8);
    show( "B: ", items );

    // Sort the list
    sort(items.begin(), items.end());
    show("C: ", items);

    // Erase '7'
    items.erase( std::find( items.begin(), items.end(), 7 ) );
    show("D: ", items);

    // Erase the entire list
    items.clear();
    show("E: ", items);
}
std_vector_demo()...
A:  9, 3, 7, 1
B:  9, 8, 3, 7, 1
C:  1, 3, 7, 8, 9
D:  1, 3, 8, 9
E:
int main()
{
    raw_pointer_demo();
    shared_pointer_demo();
    std_list_demo();
    std_vector_demo();
}

The Node_reference declaration above addresses one of the most interesting language-level differences between Java and C++. In Java, declaring an object of type Node would implicitly use a reference to a separately allocated object. In C++, you have the choice of reference (pointer) vs. direct (stack) allocation -- so you have to handle the distinction explicitly. In most cases you would use direct allocation, although not for list elements.

c++ - Why do linked lists use pointers instead of storing nodes inside...

c++ pointers linked-list

There are two ways to reference and allocate objects in C++, while in Java there is only one way.

In order to explain this, the following diagrams show how objects are stored in memory.

Warning: The C++ syntax used in this example is similar to the syntax in Java, but the memory allocation is different.

class AddressClass
{
  public:
    int      Code;
    char[50] Street;
    char[10] Number;
    char[50] POBox;
    char[50] City;
    char[50] State;
    char[50] Country;
};

class CustomerClass
{
  public:
    int           Code;
    char[50]      FirstName;
    char[50]      LastName;
    // "Address" IS A pointer !!!
    AddressClass* Address;
};

.......................................
..+-----------------------------+......
..|        AddressClass         +<--+..
..+-----------------------------+...|..
..| [+] int:      Code          |...|..
..| [+] char[50]: Street        |...|..
..| [+] char[10]: Number        |...|..
..| [+] char[50]: POBox         |...|..
..| [+] char[50]: City          |...|..
..| [+] char[50]: State         |...|..
..| [+] char[50]: Country       |...|..
..+-----------------------------+...|..
....................................|..
..+-----------------------------+...|..
..|         CustomerClass       |...|..
..+-----------------------------+...|..
..| [+] int:      Code          |...|..
..| [+] char[50]: FirstName     |...|..
..| [+] char[50]: LastName      |...|..
..| [+] AddressClass*: Address  +---+..
..+-----------------------------+......
.......................................

int main(...)
{
   CustomerClass* MyCustomer = new CustomerClass();
     MyCustomer->Code = 1;
     strcpy(MyCustomer->FirstName, "John");
     strcpy(MyCustomer->LastName, "Doe");

     MyCustomer->Address = new AddressClass();
     MyCustomer->Address->Code = 2;
     strcpy(MyCustomer->Address->Street, "Blue River");
     strcpy(MyCustomer->Address->Number, "2231 A");

     delete MyCustomer->Address;
     delete MyCustomer;

   return 0;
} // int main (...)

If you compare the two techniques, you'll see that in the first one the address item is allocated within the customer, while in the second you have to create each address explicitly.

Warning: Java allocates objects in memory like this second technique, but its syntax is like the first way, which may be confusing to newcomers to C++.

So your list example could be something similar to the following example.

class Node
{
  public:
   Node(int data);

   int m_data;
   Node *m_next;
};

.......................................
..+-----------------------------+......
..|            Node             |......
..+-----------------------------+......
..| [+] int:           m_data   |......
..| [+] Node*:         m_next   +---+..
..+-----------------------------+...|..
....................................|..
..+-----------------------------+...|..
..|            Node             +<--+..
..+-----------------------------+......
..| [+] int:           m_data   |......
..| [+] Node*:         m_next   +---+..
..+-----------------------------+...|..
....................................|..
..+-----------------------------+...|..
..|            Node             +<--+..
..+-----------------------------+......
..| [+] int:           m_data   |......
..| [+] Node*:         m_next   +---+..
..+-----------------------------+...|..
....................................v..
...................................[X].
.......................................

Since a linked list has a variable quantity of items, memory is allocated as required and as available.

Also worth mentioning, as @haccks commented in his post:

Nested items of the same class cannot use the "no-pointer" technique, since a class cannot contain an object of its own type by value.

c++ - Why do linked lists use pointers instead of storing nodes inside...

c++ pointers linked-list

Because this in C++:

int main (..)
{
    MyClass myObject;

    // or

    MyClass * myObjectPointer = new MyClass();

    ..
}
is equivalent to this in Java:

public static void main (..)
{
    MyClass myObjectReference = new MyClass();
}

where both of them create a new object of MyClass using the default constructor.

c++ - Why do linked lists use pointers instead of storing nodes inside...

c++ pointers linked-list

I'll go against everyone else here and say that, yes, the first approach might end up being more efficient. In the second approach, you're allocating memory on the heap O(N) times, where N is the number of nodes in the list. If you're using a vector, you're only making O(log N) heap allocations.

Also, if you're on a 64-bit machine, the overhead of storing a pointer in each node may be a bit too much if you're dealing with lots of small items. Using a vector, you can use a smaller nextItem, e.g. 32-bit instead of 64-bit, which, if you're making a list to hold 32-bit ints, would be a 1.5x improvement in memory usage.

Another possible optimization is that if you know up-front that you'll be dealing with a lot of elements, you can reserve a big vector and have a single heap allocation for a pretty long time.

I recently took a course on applications of automata, in which the lecturer implemented some of the algorithms for pretty large data sets. One of the techniques he showed us was exactly your first approach of representing a linked list. For a piece of coursework I tried implementing it both ways (with pointers, and with a vector plus a nextItem kind of thing), and the vector one performed much better (it did have other optimizations too, but the vector definitely had an effect).
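The first approach discussed here (nodes stored in one growable array, with integer indices playing the role of pointers) can be sketched as follows. This is an illustrative Python sketch only; the names IndexedLinkedList, push_front, and NIL are my own:

```python
class IndexedLinkedList:
    """Linked list whose nodes live in one contiguous growable array.
    'Pointers' are integer indices into that array, so there is one
    amortized O(1) append per insertion instead of a per-node heap allocation."""

    NIL = -1  # sentinel index playing the role of a null pointer

    def __init__(self):
        self.data = []       # node payloads, stored contiguously
        self.next_item = []  # index of the following node, or NIL
        self.head = self.NIL

    def push_front(self, value):
        # Append the new node at the end of the arrays and relink the head.
        self.data.append(value)
        self.next_item.append(self.head)
        self.head = len(self.data) - 1

    def to_list(self):
        # Walk the chain of indices, exactly as one would walk node pointers.
        out, i = [], self.head
        while i != self.NIL:
            out.append(self.data[i])
            i = self.next_item[i]
        return out
```

Because next_item holds small integers rather than full pointers, this is also where the smaller-nextItem memory saving mentioned above comes from.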

I think what @smilingbuddha is asking about is more like a collection of linked lists, or at least that's what I've used it for; for example, when you represent a graph using adjacency lists. You need a linked list (or array, or whatever) of all the neighbors of each node. So instead of keeping an array of linked lists or a vector of vectors, you just keep an array of indexes pointing to the last inserted neighbor of every node.

c++ - Which is a more efficient implementation of a linked list? - Sta...

c++

The problem here is that you are referring to a nested dependent type name (i.e., BLink is nested inside RingBuffer, which depends on a template parameter).

You need to help your compiler a little in this case by stating that RingBuffer<T>::BLink is an actual type name. You do this by using the typename keyword.

template <typename T>
typename RingBuffer<T>::BLink * RingBuffer<T>::NewLink(const T& t)
{
  // ...
}

The compiler cannot know if RingBuffer<T>::BLink is a type name or a static member until the template parameter T is known. When the compiler parses your function template T is not known and the rule to solve the ambiguity is to default to "this is not a type name".

template<typename C>
void print2nd(const C& container)
{
  C::const_iterator * x;  // meant as a pointer declaration, but parsed as a multiplication
}

This perhaps illustrates the problem a little better, as it's more compact. As already said, it is not clear to the parser whether C::const_iterator is a type name or a static data member, because it doesn't know what C is when it parses this part of the code (it may find out later, when the template is actually instantiated). So, to ease compiler implementers' lives, this ambiguity is resolved as "not a type name", and if the programmer wants to use a type name that is nested inside anything dependent on a template parameter, he/she has to put the typename keyword in front of the name to let the compiler know it should be treated as a type name.

Unfortunately, there is an exception to that rule: nested dependent type names must not be preceded by typename inside a base class list or in the base class identifier of a member initialization list.

template<typename T>
struct Base {
  struct Nested {
    Nested(int) {}
  };
};

template<typename T>
struct Derived : public Base<T>::Nested { // typename not allowed here
  Derived(int i)
    : Base<T>::Nested(i) // nor here
  {}
};

Btw: You should set your console client's charset to UTF-8, so that special characters display correctly.

thanks for the very good explanation

linked list - Template class, static function compile error c++ - Stac...

c++ linked-list

On the contrary: using your first method, it is inefficient to remove items from the linked list, as you "lose" the slot in the vector where that item was stored and would have to walk the whole list, garbage-collection style, to discover which slots are not in use.

With regard to memory fragmentation, having lots of small allocations is generally not an issue; indeed, since a vector is required to be contiguous, allocating the memory for it will cause fragmentation as you require larger and larger blocks of contiguous memory. In addition, each time the vector is resized you cause large blocks of memory to be copied.

In fact, your first answer arrogates to yourself the job of the memory allocator and the memory management unit. The job of the memory allocator is to hand out small chunks of memory; the job of the MMU (among others) is to ensure that pointers between blocks of memory continue to point to the same logical memory even when it is moved around in physical memory. Your nextitem int members are essentially functioning as pointers. Unless you have very specialised requirements, the hardware, kernel, and malloc can do this job far better than you can.

c++ - Which is a more efficient implementation of a linked list? - Sta...

c++

If you want to avoid caching a non-lazy copy of the entire collection, you could write a simple method that does it using a linked list.

The following method adds each value it finds in the original collection to a linked list and trims the linked list down to the number of items required. Since it keeps the linked list trimmed to this number of items the entire time it iterates through the collection, it keeps a copy of at most N items from the original collection.

It does not require you to know the number of items in the original collection, nor to iterate over it more than once.

IEnumerable<int> sequence = Enumerable.Range(1, 10000);
IEnumerable<int> last10 = sequence.TakeLast(10);
...
public static class Extensions
{
    public static IEnumerable<T> TakeLast<T>(this IEnumerable<T> collection,
        int n)
    {
        if (collection == null)
            throw new ArgumentNullException("collection");
        if (n < 0)
            throw new ArgumentOutOfRangeException("n", "n must be 0 or greater");

        LinkedList<T> temp = new LinkedList<T>();

        foreach (var value in collection)
        {
            temp.AddLast(value);
            if (temp.Count > n)
                temp.RemoveFirst();
        }

        return temp;
    }
}

Nice, clean and much better performance.

I think it's the only solution that doesn't cause the source enumerator to be run through twice (or more) and doesn't force materialization of the enumeration, so in most applications I would say it is much more efficient in terms of memory and speed.

c# - Using Linq to get the last N elements of a collection? - Stack Ov...

c# linq

Brian's answer is the classically correct one. In fact, this is one of the best ways to implement persistent functional queues with amortized constant time. This is so because in functional programming we have a very nice persistent stack (linked list). By using two lists in the way Brian describes, it is possible to implement a fast queue without requiring an obscene amount of copying.

As a minor aside, it is possible to prove that you can do anything with two stacks: a machine with two stacks is equivalent to a universal Turing machine. However, as Forth demonstrates, it isn't always easy. :-)
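A minimal sketch of the two-stack queue technique from Brian's answer, written in Python for brevity (the class and method names are my own): pushes go onto an inbox stack, and pops drain the inbox into an outbox, reversing the order, so each element is moved at most once and all operations are amortized O(1).

```python
class TwoStackQueue:
    """FIFO queue built from two LIFO stacks."""

    def __init__(self):
        self._inbox = []   # newest elements on top
        self._outbox = []  # oldest elements on top, once transferred

    def enqueue(self, value):
        self._inbox.append(value)

    def dequeue(self):
        # Only when the outbox is empty do we pay to reverse the inbox;
        # each element crosses over exactly once, giving amortized O(1).
        if not self._outbox:
            while self._inbox:
                self._outbox.append(self._inbox.pop())
        return self._outbox.pop()
```

In a functional setting the two Python lists would be two persistent linked-list stacks, which is exactly the structure the answer above describes.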

It was deleted because it wasn't very efficient: it copied elements between the two stacks.

@DaveL. It was deleted, but I wish people could still see it. If you want to understand the more advanced solution, you should first learn an inefficient but easier-to-understand one. It would be abrupt to try to understand quicksort before digesting bubble sort, selection sort, or insertion sort.

algorithm - How to implement a queue using two stacks? - Stack Overflo...

algorithm data-structures stack queue

The solution suggested by @dhsto works, but I have found an alternative using linked folders. I have written about it in detail in this article; here is how it can be implemented:

It can be achieved by creating a folder to hold your references; I like to name this _referencesTS. The folder will contain all of the links to files from Test1. This could be done file by file, but it would become very cumbersome if it had to be done for each new TS file. Linking a folder, however, links all of the files beneath it; this can be done by editing the csproj file.

To edit the file right click the Test2 project and click Unload Project, then right click the project and click Edit Test2.csproj. Navigate to the <ItemGroup> that contains the <TypeScriptCompile> tags and insert the code below:

<TypeScriptCompile Include="..\Test1\**\*.ts">
   <Link>_referencesTS\%(RecursiveDir)%(FileName)</Link>
</TypeScriptCompile>

Replace the relative path with the location of your TS files in Test1. This uses wildcards (*) to link all .ts files (dictated by the *.ts) within all subfolders (dictated by the \**\).

Note: The only downside to this approach is that when a new file is added to Test1 within a linked folder, the user has to unload and reload the project, or close and reopen the solution, for it to appear in Test2.

This solution looks excellent! I can see the linked files of my projectB in my projectA explorer tree but I cannot import this references to be used in a class defined in projectA. Any idea?

It seems to work for me with both .ts and .d.ts if those files are really there, but not with the link solution you propose here.

This worked excellently. I simply included ..\Source**. since I wanted to create two identical static file projects for nuget. One with the source as content and one with all the static files embedded. Excellent!

In my case the referenced library is in a different namespace. Visual Studio can resolve the reference, but compilation fails. I put the two libraries in the same namespace; it did not help.

visual studio - Cross-project references between two projects - Stack ...

visual-studio reference typescript

A solution based on linked lists

Apparently, my first solution has at least two serious flaws: it is dead slow and completely impractical for lists larger than 100 elements, and it contains some bug(s) which I haven't yet been able to identify -- it sometimes misses some bands. So, I will provide two (hopefully correct) and much more efficient alternatives, and I include the flawed one below for anyone interested.

Here is a solution based on linked lists. It allows us to still use patterns but avoid inefficiencies caused by patterns containing __ or ___ (when repeatedly applied):

ClearAll[toLinkedList];
toLinkedList[x_List] := Fold[{#2, #1} &, {}, Reverse@x]

ClearAll[accumF];
accumF[llFull_List, acc_List, {h_, t_List}, ctr_, max_, min_, band_, rLen_] :=
  With[{cmax = Max[max, h], cmin = Min[min, h]},
     accumF[llFull, {acc, h}, t, ctr + 1, cmax, cmin, band, rLen] /; 
        Abs[cmax - cmin] < band];
accumF[llFull_List, acc_List, ll : {h_, _List}, ctr_, _, _, band_, rLen_] /; ctr >= rLen :=
     accumF[ll, (Sow[acc]; {}), ll, 0, h, h, band, rLen];
accumF[llFull : {h_, t : {_, _List}}, _List, ll : {head_, _List}, _, _, _, band_, rLen_] :=
     accumF[t, {}, t, 0, First@t, First@t, band, rLen];
accumF[llFull_List, acc_List, {}, ctr_, _, _, _, rLen_] /; ctr >= rLen := Sow[acc];

ClearAll[getBandsLL];
getBandsLL[lst_List, runLength_Integer, band_?NumericQ] :=
  Block[{$IterationLimit = Infinity},
     With[{ll = toLinkedList@lst},
        Map[Flatten,
          If[# === {}, #, First@#] &@
            Reap[
              accumF[ll, {}, ll, 0, First@ll, First@ll, band,runLength]
            ][[2]]
        ]
     ]
  ];

Here are examples of use:

In[246]:= getBandsLL[{-1.2,-1.8,1.5,-0.6,-0.8,-0.1,1.4,-0.3,-0.1,-0.7},3,1]
Out[246]= {{-0.6,-0.8,-0.1},{-0.3,-0.1,-0.7}}

In[247]:= getBandsLL[{-1.2,-1.8,1.5,-0.6,-0.8,-0.1,-0.5,-0.3,-0.1,-0.7},3,1]
Out[247]= {{-0.6,-0.8,-0.1,-0.5,-0.3,-0.1,-0.7}}

The main idea of the function accumF is to traverse the number list (converted to a linked list beforehand) and accumulate a band in another linked list, which is passed to it as its second argument. Once the band condition fails, the accumulated band is memorized using Sow (if it was long enough), and the process starts over with the remaining part of the linked list. The ctr parameter might not be needed if we chose to use Depth[acc] instead.

There are a few non-obvious things in the above code. One subtle point is that an attempt to join the two middle rules for accumF into a single rule (they look very similar) and use CompoundExpression (something like If[ctr>=rLen, Sow[acc]; accumF[...]]) on the r.h.s. would make accumF non-tail-recursive (see this answer for a more detailed discussion of this issue; this is also why I place the (Sow[acc]; {}) expression inside a function call - to avoid a top-level CompoundExpression on the r.h.s.). Another subtle point is that I have to maintain a copy of the linked list containing the elements remaining right after the last successful match, since in the case of an unsuccessful sequence I need to roll back to that list minus its first element and start over. This linked list is stored in the first argument of accumF.

Note that passing large linked lists around does not cost much, since what is copied is only the first element (head) and a pointer to the rest (tail). This is the main reason why using linked lists vastly improves performance compared to patterns like {___,x__,right___} - in the latter case, the full sequences x and right are copied. With linked lists, we effectively copy only a few references, and therefore our algorithm behaves roughly as we expect (linearly in the length of the data list here). In this answer, I also mentioned the use of linked lists in such cases as one of the techniques for code optimization (section 3.4).
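To illustrate the idea outside Mathematica (a sketch in Python; the function name is mine, not from the answer): a linked list can be built as nested head/tail pairs, and "passing the rest of the list" hands over a single shared reference rather than copying elements:

```python
def to_linked(xs):
    """Analogue of toLinkedList: [1, 2, 3] -> (1, (2, (3, ())))."""
    node = ()
    for x in reversed(xs):
        node = (x, node)   # prepend: new head, existing tail
    return node

ll = to_linked([1, 2, 3])
head, tail = ll   # O(1): 'tail' is a shared reference to (2, (3, ())), not a copy
```

Destructuring `ll` into `head` and `tail` is the cheap operation the answer relies on; no element of the remaining list is ever duplicated.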

Here is a straightforward but not too elegant function based on Compile, which finds a list of starting and ending band positions in the list:

bandPositions = 
  Compile[{{lst, _Real, 1}, {runLength, _Integer}, {band, _Real}},
   Module[{i = 1, j, currentMin, currentMax, 
        startEndPos = Table[{0, 0}, {Length[lst]}], ctr = 0},
    For[i = 1, i <= Length[lst], i++,
      currentMin = currentMax = lst[[i]];
      For[j = i + 1, j <= Length[lst], j++,
        If[lst[[j]] < currentMin,
           currentMin = lst[[j]],
           (* else *)
           If[lst[[j]] > currentMax,
             currentMax = lst[[j]]
           ]
        ];
        If[Abs[currentMax - currentMin] >= band ,
          If[ j - i >= runLength,
             startEndPos[[++ctr]] = {i, j - 1}; i = j - 1
          ];
          Break[],
          (* else *)
          If[j == Length[lst] && j - i >= runLength - 1,
              startEndPos[[++ctr]] = {i, j}; i = Length[lst];
              Break[];
          ];
        ]
      ]; (* inner For *)
    ]; (* outer For *)
    Take[startEndPos, ctr]], CompilationTarget -> "C"];
getBandsC[lst_List, runLength_Integer, band_?NumericQ] :=
   Map[Take[lst, #] &, bandPositions[lst, runLength, band]]
In[305]:= getBandsC[{-1.2,-1.8,1.5,-0.6,-0.8,-0.1,1.4,-0.3,-0.1,-0.7},3,1]
Out[305]= {{-0.6,-0.8,-0.1},{-0.3,-0.1,-0.7}}

In[306]:= getBandsC[{-1.2,-1.8,1.5,-0.6,-0.8,-0.1,-0.5,-0.3,-0.1,-0.7},3,1]
Out[306]= {{-0.6,-0.8,-0.1,-0.5,-0.3,-0.1,-0.7}}
In[381]:= 
largeTest  = RandomReal[{-5,5},50000];
(res1 =getBandsLL[largeTest,3,1]);//Timing
(res2 =getBandsC[largeTest,3,1]);//Timing
res1==res2

Out[382]= {1.109,Null}
Out[383]= {0.016,Null}
Out[384]= True

Obviously, if one wants performance, Compile wins hands down. My observation for larger lists is that both solutions have approximately linear complexity in the size of the number list (as they should), with the compiled one roughly 150 times faster on my machine than the one based on linked lists.

In fact, both methods encode the same algorithm, although this may not be obvious. The one with recursion and patterns is arguably somewhat more understandable, but that is a matter of opinion.
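To make the shared algorithm explicit, here is a plain sketch in Python (the name get_bands is mine, not from the answers): greedily extend a run while its max-min spread stays below band, record it if it is long enough, otherwise retry one position later. It reproduces the Out[246]/Out[247] results above:

```python
def get_bands(data, run_length, band):
    """Find maximal runs where max - min < band and length >= run_length."""
    bands = []
    i, n = 0, len(data)
    while i < n:
        lo = hi = data[i]
        j = i + 1
        while j < n:
            lo, hi = min(lo, data[j]), max(hi, data[j])
            if abs(hi - lo) >= band:
                break          # adding data[j] would violate the band condition
            j += 1
        if j - i >= run_length:
            bands.append(data[i:j])
            i = j              # continue after the recorded band
        else:
            i += 1             # run too short: restart one position later
    return bands
```

This is essentially the double loop of the Compile version; the linked-list version performs the same scan via tail-recursive rule application.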

Here is the original code that I wrote first to solve this problem. It is based on a rather straightforward use of patterns and repeated rule application. As mentioned, one disadvantage of this method is its very poor performance. This is actually another case against using constructs like {___,x__,y___} in conjunction with repeated rule application for anything longer than a few dozen elements. In the mentioned recommendations for code-optimization techniques, this corresponds to section 4.1.

Anyway, here is the code:

If[# === {}, #, First@#] &@
 Reap[thisList //. {
    left___, 
    Longest[x__] /;Length[{x}] >= runLength && Abs[Max[{x}] - Min[{x}]] < band,
    right___} :> (Sow[{x}]; {right})][[2]]

It works correctly for both of the original small test lists. It also looks generally correct, but for larger lists it often misses some bands, as can be seen by comparison with the other two methods. I have not yet been able to localize the bug, since the code seems pretty transparent.

Leonid, I cannot recall seeing f[...] := With[{...}, f[...] /; ...] before. I frankly didn't think that would work. Is this explained in your book or elsewhere?

@Mr.Wizard No, it is not in the book (the book is very basic :)), but this is the standard semantics of variables shared between the body and the condition, which we have discussed many times here on SO (let me know if you need references to past discussions). The fact that the r.h.s. contains a call to another f is irrelevant. What is IMO more interesting (or did you mean this one?) is that the call to f wrapped in With still keeps f tail-recursive in the Mathematica sense - this must be related to the way scoping constructs are evaluated (contrast this with CompoundExpression).

I guess this is a natural extension of the "Trott-Strzebonski" method, but I still haven't seen it used like that before. I have yet so much to learn.

I meant the simple case. I am still trying to understand tail-recursion in Mathematica.

Finding runs of similar, not identical, elements in Mathematica - Stac...

wolfram-mathematica

std::vector<std::list<node>> address; // node is a class (note that you can pass a size here)

// Using your code:
if ((int)address.size() < number)  // number is an integer taken as input to the function
{
    address.resize(number);
}
else
{
    address.back().push_back(name); // name has type std::string
}

Please note that node is the type of the elements you want to push into the vector. As @john said, if you want to keep a list of strings then declare address as:

std::vector<std::list<std::string>> address;

Also, if you get errors because of >>, either compile this as C++11 or write address with a space between the closing angle brackets:

std::vector<std::list<std::string> > address;

Presumably since Nikhil is adding strings to the linked list, it would actually be std::vector<std::list<std::string>> collection;

@Nikhil, it would if you declare your type as in my comment.

And for the while loop I will write A.push_back(temp) - this will add an empty list to the vector, right? (declaring temp as a std::list<std::string>)

@Nikhil Yes, it will add empty lists to the vector. And you can just use the back() method on the vector to get the last element.

c++ - Create a linked vector and lists - Stack Overflow

c++ list vector stl

I hate to admit it but I ended up writing my own tree class using a linked list. On an unrelated note I just discovered this round thing which, when attached to a thing I'm calling an 'axle' allows for easier transportation of goods.

Tree data structure in C# - Stack Overflow

c# data-structures

As delnan said, these problems come from using the wrong data structure, such as a linked list when you want a vector.

  • Use the right data structure for the right problem. Learn what structures Hackage has to offer (vectors, arrays, hashmaps, hashtables, bloomfilters, and more).

To clarify, as an example of imperative optimisation: mappend is the (reverse) bind operator in the list monad. In Lisp you can switch to its destructive version, mapcan, and get a 3x boost in execution speed. This is not related to lists being the wrong data type. Append is slow in Lisp (but maybe not that slow in Haskell? I get decent performance from the Haskell list monad, a bit surprisingly).

I'm still not clear on what you're hoping for. If you want destructive updates, you have a way to get them (and structures that are mutable, tying in with the notion of using the right data structure). If you are just saying 'I suspect this operation is slow because it's slow in Lisp and would pre-emptively like an alternative', then I think you are being too hasty and should investigate the actual performance first.

Performance related to "imperative" algorithms in haskell - Stack Over...

performance haskell functional-programming lisp imperative

Update: LE, A simple expression evaluator using Lua

An alternative to implementing your own parser and expression evaluator would be to link against a library that provides one for you to use. An interesting choice would be an easily embedded scripting language such as Lua.

It is straightforward to set up a Lua interpreter instance, and pass it expressions to be evaluated, getting back a function to call that evaluates the expression. You can even let the user have variables...

Here is a sketchy implementation of a simple expression evaluator based on a Lua interpreter. I compiled this and tried it for a few cases, but it certainly should not be trusted in production code without some attention to error handling and so forth. All the usual caveats apply here.

I compiled and tested this on Windows using Lua 5.1.4 from Lua for Windows. On other platforms, you'll have to find Lua from your usual source, or from www.lua.org.

Here is the file le.h:

/* Public API for the LE library.
 */
int le_init();
int le_loadexpr(char *expr, char **pmsg);
double le_eval(int cookie, char **pmsg);
void le_unref(int cookie);
void le_setvar(char *name, double value);
double le_getvar(char *name);

Here is the file t-le.c, demonstrating a simple use of this library. It takes its single command-line argument, loads it as an expression, and evaluates it with the global variable x changing from 0.0 to 1.0 in 11 steps:

#include <stdio.h>
#include <stdlib.h>   /* for free() */
#include "le.h"

int main(int argc, char **argv)
{
    int cookie;
    int i;
    char *msg = NULL;

    if (!le_init()) {
        printf("can't init LE\n");
        return 1;
    }
    if (argc < 2) {
        printf("Usage: t-le \"expression\"\n");
        return 1;
    }
    cookie = le_loadexpr(argv[1], &msg);
    if (msg) {
        printf("can't load: %s\n", msg);
        free(msg);
        return 1;
    }
    printf("  x    %s\n"
           "------ --------\n", argv[1]);
    for (i = 0; i < 11; ++i) {
        double x = i / 10.;
        double y;

        le_setvar("x", x);
        y = le_eval(cookie, &msg);
        if (msg) {
            printf("can't eval: %s\n", msg);
            free(msg);
            return 1;
        }
        printf("%6.2f %.3f\n", x, y);
    }
    le_unref(cookie);
    return 0;
}

Here is some output from t-le:

Here is le.c, implementing the Lua Expression evaluator:

#include <lua.h>
#include <lauxlib.h>

#include <stdlib.h>
#include <string.h>

static lua_State *L = NULL;

/* Initialize the LE library by creating a Lua state.
 *
 * The new Lua interpreter state has the "usual" standard libraries
 * open.
 */
int le_init()
{
    L = luaL_newstate();
    if (L)
        luaL_openlibs(L);
    return !!L;
}

/* Load an expression, returning a cookie that can be used later to
 * select this expression for evaluation by le_eval(). Note that
 * le_unref() must eventually be called to free the expression.
 *
 * The cookie is a lua_ref() reference to a function that evaluates the
 * expression when called. Any variables in the expression are assumed
 * to refer to the global environment, which is _G in the interpreter.
 * A refinement might be to isolate the function environment from the
 * globals.
 *
 * The implementation rewrites the expr as "return "..expr so that the
 * anonymous function actually produced by lua_load() looks like:
 *
 *     function() return expr end
 *
 *
 * If there is an error and the pmsg parameter is non-NULL, the char *
 * it points to is filled with an error message. The message is
 * allocated by strdup() so the caller is responsible for freeing the
 * storage.
 * 
 * Returns a valid cookie or the constant LUA_NOREF (-2).
 */
int le_loadexpr(char *expr, char **pmsg)
{
    int err;
    char *buf;

    if (!L) {
        if (pmsg)
            *pmsg = strdup("LE library not initialized");
        return LUA_NOREF;
    }
    buf = malloc(strlen(expr) + 8);   /* "return " + expr + NUL */
    if (!buf) {
        if (pmsg)
            *pmsg = strdup("Insufficient memory");
        return LUA_NOREF;
    }
    strcpy(buf, "return ");
    strcat(buf, expr);
    err = luaL_loadstring(L, buf);
    free(buf);
    if (err) {
        if (pmsg)
            *pmsg = strdup(lua_tostring(L, -1));
        lua_pop(L, 1);
        return LUA_NOREF;
    }
    if (pmsg)
        *pmsg = NULL;
    return luaL_ref(L, LUA_REGISTRYINDEX);
}

/* Evaluate the loaded expression.
 * 
 * If there is an error and the pmsg parameter is non-NULL, the char *
 * it points to is filled with an error message. The message is
 * allocated by strdup() so the caller is responsible for freeing the
 * storage.
 * 
 * Returns the result or 0 on error.
 */
double le_eval(int cookie, char **pmsg)
{
    int err;
    double ret;

    if (!L) {
        if (pmsg)
            *pmsg = strdup("LE library not initialized");
        return 0;
    }
    lua_rawgeti(L, LUA_REGISTRYINDEX, cookie);
    err = lua_pcall(L, 0, 1, 0);
    if (err) {
        if (pmsg)
            *pmsg = strdup(lua_tostring(L, -1));
        lua_pop(L, 1);
        return 0;
    }
    if (pmsg)
        *pmsg = NULL;
    ret = (double)lua_tonumber(L, -1);
    lua_pop(L, 1);
    return ret;
}


/* Free the loaded expression.
 */
void le_unref(int cookie)
{
    if (!L)
        return;
    luaL_unref(L, LUA_REGISTRYINDEX, cookie);
}

/* Set a variable for use in an expression.
 */
void le_setvar(char *name, double value)
{
    if (!L)
        return;
    lua_pushnumber(L, value);
    lua_setglobal(L, name);
}

/* Retrieve the current value of a variable.
 */
double le_getvar(char *name)
{
    double ret;

    if (!L)
        return 0;
    lua_getglobal(L, name);
    ret = (double)lua_tonumber(L, -1);
    lua_pop(L, 1);
    return ret;
}

The above sample consists of 189 lines of code in total, including a smattering of comments, blank lines, and the demonstration. Not bad for a quick function evaluator that knows how to evaluate reasonably arbitrary expressions of one variable and has a rich library of standard math functions at its beck and call.

You have a Turing-complete language underneath it all, and it would be an easy extension to allow the user to define complete functions as well as to evaluate simple expressions.
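For comparison only (not part of the original answer): the same compile-once, evaluate-many pattern is easy to sketch in Python, with the math module standing in for Lua's math library (load_expr is a hypothetical name mirroring le_loadexpr):

```python
import math

def load_expr(expr):
    """Compile 'expr' once; return a function that evaluates it at a given x.

    Only math functions/constants and the variable x are visible; builtins
    are hidden, loosely mirroring the restricted le_loadexpr/le_eval setup.
    """
    code = compile(expr, "<expr>", "eval")
    env = {k: getattr(math, k) for k in dir(math) if not k.startswith("_")}
    def evaluate(x):
        return eval(code, {"__builtins__": {}}, dict(env, x=x))
    return evaluate

f = load_expr("sin(x) + x**2")
for i in range(11):
    x = i / 10.0
    print("%6.2f %.3f" % (x, f(x)))
```

The expression is compiled once and then evaluated eleven times with a changing x, just as t-le does through the cookie returned by le_loadexpr.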

@Imbue TinyExpr looks pretty slick. I'd encourage you to write up the recommendation as a proper answer so you can get reputation for it. It would be a better answer if simple expressions are all you need. And for a number of applications that will certainly be the case. Lua will start to look more interesting if you also want a configuration file, or to allow user-defined functions, or to evaluate algorithms that cannot be written as a single expression. The big surprise with Lua is that all that power still fits in a remarkably small library compared to any other scripting language.

@RBerteig I took your advice and wrote up an answer (perhaps 7 years too late, but maybe it'll help someone). I've used Lua in a couple different projects and I agree it's patently awesome. One of my favorite languages, in fact. I just think it's overkill if you don't actually need/want a full-blown programming language.

c - Evaluating mathematical expressions - Stack Overflow

c parsing