
Is there any limit of frequency with Web Audio, or is it the limit of ...

I don't think the Web Audio framework itself limits this, as the other answers here have mentioned. The limit probably comes from the physical limits of the microphone and loudspeaker.

I tried this with my current bookshelf loudspeaker (Kurzweil KS40A) along with a decent microphone (Zoom H4). The microphone was about 1 cm from the tweeter.

As you can see, these loudspeakers and microphones aren't able to efficiently generate or capture sounds at those frequencies.

This is more obvious when you look at the Zoom H4's frequency response. Unfortunately I couldn't find a frequency response for the KS40A.

You can also do something similar using non-browser tools to check whether you see similar results.

Each element from getByteFrequencyData is a normalized magnitude from the FFT, scaled to fit the dBFS range set by the maxDecibels and minDecibels attributes on the AnalyserNode. So a byte value of 0 means minDecibels (default -100 dBFS) or lower, and a byte value of 255 means maxDecibels (default -30 dBFS) or higher.
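
A minimal sketch of that mapping in practice (the 17 kHz test tone, fftSize and polling interval are arbitrary choices for illustration, not taken from the question):

// Feed a test tone into an AnalyserNode and read its byte spectrum.
const ctx = new AudioContext();

const osc = ctx.createOscillator();
osc.frequency.value = 17000; // test tone

const analyser = ctx.createAnalyser();
analyser.fftSize = 2048;
analyser.minDecibels = -100; // byte value 0 maps to this level or below
analyser.maxDecibels = -30;  // byte value 255 maps to this level or above

osc.connect(analyser);
analyser.connect(ctx.destination);
osc.start();

const bins = new Uint8Array(analyser.frequencyBinCount);
setInterval(() => {
  analyser.getByteFrequencyData(bins);
  // Each bin spans sampleRate / fftSize Hz; pick the bin nearest 17 kHz.
  const binIndex = Math.round(17000 / (ctx.sampleRate / analyser.fftSize));
  console.log('magnitude near 17 kHz:', bins[binIndex]);
}, 500);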

I tried with Audacity with the frequency at 17000 Hz, which the speaker is capable of generating since I can hear it, but the FFT result is still near 0. So would it be possible to amplify the audio at a certain frequency to make the data from the analyser larger?

Yup. So it might be the microphone that can't pick up 17000 Hz. Amplification will certainly help. Even our ears hear the more extreme (low or high) frequencies better at higher loudness levels: en.wikipedia.org/wiki/Fletcher%E2%80%93Munson_curves
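
One way to try that boost inside Web Audio itself is a peaking filter in front of the analyser. A sketch, assuming ctx and analyser from the snippet above, and a source node (e.g. a MediaStreamAudioSourceNode from getUserMedia) as the input; the center frequency, Q and gain values are arbitrary:

// Boost a narrow band around 17 kHz before it reaches the analyser.
const boost = ctx.createBiquadFilter();
boost.type = 'peaking';
boost.frequency.value = 17000; // center of the boosted band
boost.Q.value = 5;             // fairly narrow peak
boost.gain.value = 24;         // boost in dB

// Route the signal source -> boost -> analyser instead of source -> analyser.
source.connect(boost);
boost.connect(analyser);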

javascript - Web Audio frequency limitation? - Stack Overflow

javascript html5 audio fft web-audio

This message from Linus himself can help you with some other limits

Which is nice in that you can have a million files, and then only check out a few of them - you'll never even see the impact of the other 999,995 files.

Git fundamentally never really looks at less than the whole repo. Even if you limit things a bit (ie check out just a portion, or have the history go back just a bit), git ends up still always caring about the whole thing, and carrying the knowledge around.

So git scales really badly if you force it to look at everything as one huge repository. I don't think that part is really fixable, although we can probably improve on it.

And yes, then there's the "big file" issues. I really don't know what to do about huge files. We suck at them, I know.

See more in my other answer: the limit with Git is that each repository must represent a "coherent set of files", the "whole system" in itself (you cannot tag "part of a repository"). If your system is made of autonomous (but inter-dependent) parts, you must use submodules.

As illustrated by Talljoe's answer, the limit can be a system one (large number of files), but if you do understand the nature of Git (about data coherency represented by its SHA-1 keys), you will realize the true "limit" is a usage one: i.e, you should not try to store everything in a Git repository, unless you are prepared to always get or tag everything back. For some large projects, it would make no sense.

For a more in-depth look at git limits, see "git with large files" (which mentions git-lfs: a solution to store large files outside the git repo. GitHub, April 2015)

The three issues that limit a git repo:

  • huge files (the xdelta for packfiles is in memory only, which isn't good with large files)
  • a huge number of files, which means one blob per file, and a slow git gc to generate one packfile at a time.
  • huge packfiles, with a packfile index that is inefficient for retrieving data from the (huge) packfile.

Will a few simultaneous clones from the central server also slow down other concurrent operations for other users?

There are no locks on the server when cloning, so in theory cloning does not affect other operations. Cloning can use lots of memory though (and a lot of CPU unless you turn on the reachability bitmap feature, which you should).
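
A minimal sketch of turning that on, assuming a reasonably recent Git and run inside the server-side repository:

# Enable reachability bitmaps so clones/fetches spend less CPU counting objects.
git config repack.writeBitmaps true
# Repack everything into one pack; the bitmap index is written alongside it.
git repack -a -d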

git pull

If we exclude the server side, the size of your tree is the main factor, but your 25k files should be fine (linux has 48k files).

git push

This one is not affected by how deep your repo's history is, or how wide your tree is, so it should be quick.

Ah the number of refs may affect both git-push and git-pull. I think Stefan knows better than I in this area.

'git commit'? (It is listed as slow in reference 3.) 'git status'? (Slow again in reference 3 though I don't see it.) (also git-add)

Again, the size of your tree. At your repo's size, I don't think you need to worry about it.

Some operations might not seem to be day-to-day but if they are called frequently by the web front-end to GitLab/Stash/GitHub etc then they can become bottlenecks. (e.g. 'git branch --contains' seems terribly adversely affected by large numbers of branches.)

git-blame could be slow when a file is modified a lot.

I really wonder, with sqlite and so many database alternatives available on Linux, why they couldn't simply use a database that is easy to back up, replicate and scale.

What are the file limits in Git (number and size)? - Stack Overflow

git

How about using strtr() to substitute all of your other delimiters with the first one?

private function multiExplode($delimiters,$string) {
    // Replace every delimiter after the first with the first one, then explode on it.
    return explode($delimiters[0],strtr($string,array_combine(array_slice($delimiters,1),array_fill(0,count($delimiters)-1,array_shift($delimiters)))));
}

It's sort of unreadable, I guess, but I tested it as working over here.
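
For example (a quick sketch; the delimiter list and input string are made up):

// Split on ",", ";" and "|" in one go.
$parts = $this->multiExplode(array(',', ';', '|'), 'a,b;c|d');
// $parts == array('a', 'b', 'c', 'd')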

One too many closing parentheses at the end of that statement as originally posted.

Php multiple delimiters in explode - Stack Overflow

php explode

Isn't there a limit though?

If someone says 'You know, Hitler (yeah, Godwin's law...) had a point - maybe we would be better off without the Jews...'

The notion that I should somehow go 'Hmm, that's an interesting point of view. Have you considered that maybe genocide is a bad thing and we should avoid it? But of course, you're free to think that, and I totally respect your anti-Semitism as a legitimate point of view...' seems somehow completely wrong...

One site I know declared that promoting the Nazis (or anti-Semitism in general) was trolling.

To play devil's advocate... from a purely academic or scientific point of view, yes this kind of discussion would be fine. But in reality, humans are always the one doing the talking (at least so far), and they have emotions and, importantly, morals (though not always the same morals). A discussion around this premise is decidedly amoral to most nations of the past 65 years. But to be clear, I would almost certainly never entertain such a discussion or premise in chat here and would consider it trolling, per Stephen's comment. I would flag or use RO powers where available to handle it.

From a purely academic or scientific point of view, this "discussion" would not be fine. It's exclusionary and horribly offensive. You would be forcibly removed from any academic or scientific forum if you tried something like this. I can guarantee it because I've seen it happen. Go to a conference and try it yourself if you don't believe me.

I don't think the point is to feign acceptance. I think the point is to have a constructive debate. In a constructive debate even the worst of arguments can be used to build upon and reach an improved conclusion. You can reject an argument politely and state your reasons. Compare "I see how the media may have influenced you to feel that way, but it's a massive stereotype. I disagree with your reasons or that it would solve anything. Racial discrimination and trampling of human rights is never a solution" with "yeah you would you effin Nazi scumbucket wouldn't you, go back to your gas chambers"

The point here isn't that you have to respect anti-Semitism in chat. You can, and should, flag that sort of thing, and quickly steer the conversation away from it. What you shouldn't do is have a big debate about it in chat. Similarly, you should be wary of less obvious but still controversial topics: don't bring them up unless you're really sure everyone's okay with it and capable of reasonable discussion, and if someone brings them up despite that, be ready to steer conversation away and flag for help if necessary.

Just to muddy the waters even more, what about the multiple instances of genocide in the Old Testament? Consider how you might go about having a respectful discussion with a deeply religious Christian.

@VictorStafusa "... would sound completelly sarcastic for me." That's because being nice is found rarely and you rather want to believe in the opposite. Sarcasm and truly nice behavior is indistinguishable.

@StephenLeppik - Just to prove Shog's point to you: there's a lot of people who - with good reason - consider Socialism about as bad as Nazism (I won't bother offering arguments as that's not the point of this comment - but the point is, right or wrong, they have a valid point). If you're allowed to ban promoting Nazism based on your reasons, are those people allowed to also ban promoting Socialism based on theirs? Not so simple now, is it? You can't simply say "this is good politics, this is bad politics" precisely because there are no neat objective lines no matter how you pretend.

@DVK I don't mind not talking about politics but "right or wrong they have a valid point" is not a sensible thing to say about anything.

@DVK Nazism is about hate speech. I would disagree that they are anywhere near equally harmful, since we have a mixed capitalist/socialist society, as does most of the rest of the world.

@angorsaxon - as I said, "I won't bother offering arguments". This isn't the venue. If you want to know the reasoning behind this, Google. Or ask on Politics.SE if you can fit it into SE-answerable question.

How about "Vaccines cause autism" in an SE forum? Less instinctively problematic than Nazis, but still quite a dangerous bit of misinformation. Just going along with it causes problems for herd immunity and really does encourage diseases directorsblog.nih.gov/2016/03/22/

@DVK - Yes. Of course promoting either Communism in the form of suppressing or murdering civilians in the same way that Nazism suppressed and murdered their own civilians should be banned. And in the context of chat at SE, it is most certainly in violation of the "Be Nice" policy. Your example is rather simple, and I can simply say that when a Government enacts policy to murder its own civilians that is bad politics. Furthermore, if you can locate a Nazi politician, let the local authorities know. Banning would be getting off easy.

discussion chat etiquette

There is no limit in theory. For HTTP URLs, the HTTP 1.1 specification states:

The HTTP protocol does not place any a priori limit on the length of a URI. Servers MUST be able to handle the URI of any resource they serve, and SHOULD be able to handle URIs of unbounded length if they provide GET-based forms that could generate such URIs. A server SHOULD return 414 (Request-URI Too Long) status if a URI is longer than the server can handle (see section 10.4.15).

But in practice, many clients and servers only support URLs up to a certain length. The rule of thumb is not to use URLs longer than 2000 characters (percent encoding already taken into account).

Do you know a list of major (often used) components that have this limit?

http - What is the limit on QueryString / GET / URL parameters - Stack...

http query-string

It is not the delimiter.

Then import/link as DateTime and split this (if you really have to).

How to separate space and semicolon in CSV files import to MS Access? ...

csv ms-access ms-access-2010 delimited

This is how I limit the results in MS SQL Server 2012

SELECT * 
FROM table1
ORDER BY columnName
  OFFSET 10 ROWS FETCH NEXT 10 ROWS ONLY

NOTE: OFFSET can only be used together with ORDER BY.

To explain the line OFFSET xx ROWS FETCH NEXT yy ROWS ONLY:

The "xx" is the number of rows you want to skip before you start pulling records from the table. I.e.: if there are 40 records in table1, the code above skips the first 10 rows and starts pulling at row 11.

The "yy" is the number of records / rows you want to pull from the table. To build on the previous example: if table1 has 40 records, you skip the first 10 rows (xx) and fetch the NEXT 10 rows (yy), so the code above returns rows 11 through 20.

mysql - How to implement LIMIT with Microsoft SQL Server? - Stack Over...

sql mysql sql-server migration

You don't have any delimiters. Delimiters enclose the pattern:

When using the PCRE functions, it is required that the pattern is enclosed by delimiters. A delimiter can be any non-alphanumeric, non-backslash, non-whitespace character.

Typically a "modifier" is a character that sets options for the regular expression (case sensitivity, multiline mode, etc.) and comes after the closing delimiter.

So, this error message is saying that it thinks you are using _ as a modifier, because it seems to be after the pattern. Try enclosing your pattern in the standard delimiter, /, as in /PATTERN_GOES_HERE/

You also need to match something in the capture groups. .* will do (match any amount of anything):

preg_match('/(?<data1>.*)_(?<data2>.*)_(?<data3>.*)_(?<data4>.*)_(?<data5>.*)_(?<data6>.*)/', $str, $matches);

print_r($matches);
Array
(
    [0] => String_Length_Location_Time_Degree_Alt
    [data1] => String
    [1] => String
    [data2] => Length
    [2] => Length
    [data3] => Location
    [3] => Location
    [data4] => Time
    [4] => Time
    [data5] => Degree
    [5] => Degree
    [data6] => Alt
    [6] => Alt
)

Alternatively, your case looks like a good candidate for explode, which splits a string into an array, cutting it every time it comes across a delimiter character; here you would specify the underscore, "_".
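
For example (a quick sketch, assuming the same $str as above):

// Split on the underscore delimiter instead of using a regex.
$parts = explode('_', $str);
// array('String', 'Length', 'Location', 'Time', 'Degree', 'Alt')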

This answer is right as far as delimiters go, but it still won't actually capture anything...

preg match - PHP: preg_match issue - Stack Overflow

php preg-match

When you have both the LIMIT and ORDER BY, the optimizer has decided it is faster to limp through the unfiltered records on foo by key descending until it gets five matches for the rest of the criteria. In the other cases, it simply runs the query as a nested loop and returns all the records.

Offhand, I'd say the problem is that PG doesn't grok the joint distribution of the various ids and that's why the plan is so sub-optimal.

For possible solutions: I'll assume that you have run ANALYZE recently. If not, do so. That may explain why your estimated times are high even on the version that returns fast. If the problem persists, perhaps run the ORDER BY as a subselect and slap the LIMIT on in an outer query.
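
Something along these lines (a sketch with made-up table and column names):

-- Order inside a subquery, then apply the LIMIT outside it.
-- The OFFSET 0 is a common trick to keep the planner from flattening the subquery.
SELECT *
FROM (
    SELECT *
    FROM foo
    WHERE bar_id = 42
    ORDER BY id DESC
    OFFSET 0
) AS ordered_foo
LIMIT 5;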

ok... so foos.bars.last results in a full index scan on bars... nice -_-

ok... so this results in a full index scan only if foos have 0 bars... still annoying though

foos.bars.last unless foos.bars.empty?

sql - Extremely slow PostgreSQL query with ORDER and LIMIT clauses - S...

sql postgresql query-optimization sql-order-by limit

No, there is no limit on how much space a session may take (or how many variables a session may hold). The only limit is the specs of your machine, as defined by the memory_limit setting in your php.ini. Be aware that this space will be shared among all sessions for all users.
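
For example, you can check that setting from PHP itself:

// Shows the current per-request memory cap, e.g. "128M".
echo ini_get('memory_limit');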

The question is not about "how many" session variables I can have, but how much data each of them can store.

php - Are there limits for session variables? - Stack Overflow

php session-variables

For Rails 5, this is what I had to do to limit the log size without changing the server output in the console:

According to the documentation, if you want to limit the size of the log folder, put this in your environment-file ('development.rb'/'production.rb').
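
Judging from the parameters described below, that line presumably looks something like this (a reconstruction, not necessarily the answer's exact code):

# Keep the current log plus 1 rotated file, each capped at 50 MB.
config.logger = ActiveSupport::Logger.new(config.paths['log'].first, 1, 50 * 1024 * 1024)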

With this, your log files will never grow bigger than 50MB. You can change the size to your own preference. The 1 in the second parameter means that 1 historic log file will be kept, so you'll have up to 100MB of logs: the current log and the previous 50MB chunk.

The first argument is the filename, simply speaking, i.e. 'log/development.log'. So I'd prefer the longer but more transparent way: instead of config.paths['log'].first I'd put Rails.root.join('log', "#{Rails.env}.log")

config.logger = ActiveSupport::Logger.new(config.log_file, 1, 20*1024*1024)

logging - Ruby on Rails production log rotation - Stack Overflow

ruby-on-rails logging production-environment

This was posted on the Hibernate forum a few years back when asked about why this worked in Hibernate 2 but not in Hibernate 3:

Limit was never a supported clause in HQL. You are meant to use setMaxResults().

So if it worked in Hibernate 2, it seems that was by coincidence, rather than by design. I think this was because the Hibernate 2 HQL parser would replace the bits of the query that it recognised as HQL, and leave the rest as it was, so you could sneak in some native SQL. Hibernate 3, however, has a proper AST HQL Parser, and it's a lot less forgiving.


I would argue that Hibernate 3's approach is more correct. Your usage of Hibernate is meant to be database-agnostic, so you should have to do these sorts of things in an abstract manner.

I agree, but it makes migration a royal pain in the ass when features are dropped like that.

But with setMaxResults, isn't the query run first and setMaxResults then applied to the result set, taking a limited number of rows to display to the user? In my case 3 million records are queried and then I call setMaxResults to take 50 records, but I don't want that; I want the query itself to fetch only 50 records. Is there a way to do that?

Old post I know. I fully agree with Rachel. Using NHibernate (.Net port of Hibernate), I've recently upgraded from 2 to 3 and same thing, top X is now throwing a parser error. However, when I added setMaxResults on the query, it did generate a TOP X in the resulting SQL (using MsSql2008Dialect). This is good.

@Rachel With setMaxResults hibernate will append the limit part to the query. It will not get all the results. You can check the query it produces by enabling: <property name="show_sql">true</property>
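
A minimal sketch of that (the entity name and session variable are placeholders):

// Hibernate translates setMaxResults into the dialect's own pagination clause
// (LIMIT, TOP, ROWNUM, ...), so only 50 rows are fetched from the database.
Query query = session.createQuery("from Foo f order by f.id");
query.setFirstResult(0);  // optional offset
query.setMaxResults(50);  // becomes the SQL limit
List<Foo> firstFifty = query.list();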

java - How do you do a limit query in HQL? - Stack Overflow

java hibernate hql

This is not a bug. The limit and offset happen after ordering, and it is not deterministic which rows are selected in one case vs another. In general you want a tiebreaker so that your ordering is stable and deterministic (I prefer to use unique tiebreakers even when I don't have limit or offset issues, to ensure the query returns the same results each time it is run).

If you are doing pagination, add the primary key or surrogate key to the sort as a tiebreaker. That is really the best way.
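
For example (a sketch with made-up table and column names):

-- Without the id tiebreaker, rows sharing the same created_at value can come back
-- in a different order from one page to the next.
SELECT *
FROM events
ORDER BY created_at DESC, id DESC
LIMIT 20 OFFSET 40;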

sql - Strange ordering bug (is it a bug?) in postgres when ordering tw...

sql postgresql sql-order-by limit offset

You can use toFixed(), but there is a limit of 20 decimal places.

(4.65661287307739E-10).toFixed(20)
"0.00000000046566128731"
(4.65661287307739E-30).toFixed(20)
"0.00000000000000000000"

So if you always have fewer than 20 decimal places, you'll be fine. Otherwise, I think you may have to write your own.

Parsing and converting exponential values to decimal in JavaScript - S...

javascript