
You are interested in how TCP/IP works!

The other answers are already helpful. I want to point out that your question is a very good one, and I want to answer it from the point of view of listen() and accept(). The behavior of these two system calls should be sufficient to answer your question.

For the core part of the question there really is no difference depending on HTTP or WebSocket: the common ground is TCP over IP, and that is sufficient to answer your question. Still, you deserve an answer on how WebSocket relates to TCP (I have tried to elaborate on that a bit more here): sending an HTTP request requires an established TCP/IP connection between two parties. In the case of a simple web browser / web server scenario:

  • first, a TCP connection is established between both (initiated by the client)
  • then an HTTP request is sent through that TCP connection (from the client to the server)
  • then an HTTP response is sent through the same TCP connection (in the other direction, from the server to the client)

After this exchange, the underlying TCP connection is not needed anymore and is usually torn down. In the case of an HTTP Upgrade request, the underlying TCP connection just goes on living, and the WebSocket communication goes through the very same TCP connection that was created initially (step (1) above).

As you can see, the only difference between WebSocket and standard HTTP is a switch in a high-level protocol (from HTTP to WebSocket), without changing the underlying transport channel (a TCP/IP connection).

This is a topic I was once struggling with myself and that many do not understand. However, the concept actually is very simple once one understands how the basic socket-related system calls provided by the operating system work.

First, one needs to appreciate that a TCP/IP connection is uniquely defined by five pieces of information:

  • the protocol (TCP or UDP)
  • the source IP address
  • the source port
  • the destination IP address
  • the destination port
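As a rough illustration of that five-tuple, here is a small Python sketch (loopback addresses, with ports picked by the OS): two connections to the same server port are distinguished solely by the client-side port.

```python
import socket

# Sketch: two TCP connections to the same listening port on 127.0.0.1.
# Protocol, both IPs and the server port are identical for both connections;
# only the client-side port of the five-tuple differs.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))            # port 0: let the OS pick a free port
server.listen(5)
server_port = server.getsockname()[1]

c1 = socket.create_connection(("127.0.0.1", server_port))
c2 = socket.create_connection(("127.0.0.1", server_port))
conn1, addr1 = server.accept()
conn2, addr2 = server.accept()

same_ip = addr1[0] == addr2[0]           # both clients are 127.0.0.1
different_port = addr1[1] != addr2[1]    # distinct client-side ports

for s in (c1, c2, conn1, conn2, server):
    s.close()
```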

Now, socket objects are often thought to represent a connection. But that is not entirely true. They may represent different things: they can be active or passive. A socket object in passive/listen mode does something very special, and that is important to answer your question. http://linux.die.net/man/2/listen says:

listen() marks the socket referred to by sockfd as a passive socket, that is, as a socket that will be used to accept incoming connection requests using accept(2).

So, we can create a passive socket that listens for incoming connection requests. By definition, such a socket can never represent a connection. It just listens for connection requests.

accept()

The accept() system call is used with connection-based socket types (SOCK_STREAM, SOCK_SEQPACKET). It extracts the first connection request on the queue of pending connections for the listening socket, sockfd, creates a new connected socket, and returns a new file descriptor referring to that socket. The newly created socket is not in the listening state. The original socket sockfd is unaffected by this call.

That is all we need to know in order to answer your question. accept() does not change the state of the passive socket created before. It returns an active (connected) socket (such a socket then represents the five pieces of information stated above -- simple, right?). Usually, this newly created active socket object is then handed off to another process or thread or just "entity" that takes care of the connection. After accept() has returned this connected socket object, it can be called again on the passive socket, and again and again -- something that is known as an accept loop.

But calling accept() takes time, right? Can't it miss incoming connection requests? There is more essential information in the just-quoted help text: there is a queue of pending connection requests! It is handled automatically by the TCP/IP stack of your operating system. That means that while accept() can only deal with incoming connection requests one by one, no incoming request will be missed even when they arrive at a high rate or (quasi-)simultaneously. One could say that the behavior of accept() rate-limits the frequency of incoming connection requests your machine can handle. However, this is a fast system call, and in practice other limitations kick in first -- usually those related to handling all the connections that have been accepted so far.
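The accept loop described above can be sketched in a few lines of Python (the echo handler and the thread-per-connection hand-off are just illustrative choices, not the only way to do it):

```python
import socket
import threading

# A minimal accept-loop sketch (assumptions: loopback server, a trivial echo
# handler, thread-per-connection hand-off). The kernel queues pending
# connection requests, so none are lost between accept() calls.

def handle_connection(conn):
    data = conn.recv(1024)   # read the client's message ...
    conn.sendall(data)       # ... and echo it back
    conn.close()

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen(16)          # backlog: size hint for the pending-request queue
port = listener.getsockname()[1]

def accept_loop(n):
    for _ in range(n):       # a real server would loop forever
        conn, _addr = listener.accept()   # returns a NEW connected socket
        threading.Thread(target=handle_connection, args=(conn,)).start()

threading.Thread(target=accept_loop, args=(3,), daemon=True).start()

# Three clients in a row -- each is accepted and served in turn:
replies = []
for i in range(3):
    c = socket.create_connection(("127.0.0.1", port))
    c.sendall(b"hello %d" % i)
    replies.append(c.recv(1024))
    c.close()
listener.close()
```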

your answer is the best one so far, and very instructive too! This is actually more a TCP-related question than an HTTP/WebSocket-related one. The handover performed upon receiving the HTTP Upgrade request doesn't have anything to do with how HTTP and/or WebSocket services can serve multiple connections on the same TCP port.

@Jan-PhilipGehrcke I learned a lot from your post and really appreciate your detailed explanation. But I can only mark one answer. Sorry for that. Add 1 instead. :)

@Jan-PhilipGehrcke, today I re-read your answer. I have an impression that the server-client distinction only exists before the establishment of the connection; once accept() is called and another server socket is granted, the client and server are merely equal peers talking to each other over an active TCP connection.

http - How WebSocket server handles multiple incoming connection reque...

http websocket spring-websocket

LDAP sits on top of the TCP/IP stack and controls internet directory access. It is environment agnostic.

ADSI is a COM wrapper around the LDAP layer; AD and ADSI are Windows-specific.

You can see Microsoft's explanation here.

There's a problem in Microsoft's explanation. Quote: Microsoft provides the Active Directory Service Interfaces (ADSI) for developing client-side directory service applications. ADSI consists of a directory service model and a set of COM interfaces. These interfaces enable development of network directory service access applications. ADSI uses an LDAP provider to communicate with Active Directory. ADSI can also access Novell NetWare Directory Services. ADSI can communicate with various directory services by using their native providers. --------- NetWare as opposed to AD or to LDAP?

AD is a server. ADSI is a COM wrapper. NDS is a product and it uses LDAP. @jwilleke

What are the differences between LDAP and Active Directory? - Stack Ov...

active-directory ldap

Aye! Don't give up on your stream yet, Jbu. We are talking serial communication here. For serial stuff, it is absolutely expected that a -1 can/will be returned on reads, while data may still arrive at a later time. The problem is that most people are used to dealing with TCP/IP, which should always return a 0 unless the TCP/IP connection was disconnected... then yea, -1 makes sense. However, with serial there is no data flow for extended periods of time, and no "HTTP Keep Alive", or TCP/IP heartbeat, or (in most cases) no hardware flow control. But the link is physical, still connected by "copper", and still perfectly live.

Now, if what they are saying is correct, i.e. serial should be closed on a -1, then why do we have to watch for stuff like onCTS, onCarrierDetect, onDSR, onRingIndicator, etc.? Heck, if 0 means it's there, and -1 means it's not, then screw all those detection functions! :-)

Q: "It seemed like only the tail end of the second event's data would be displayed and the rest was missing."

A: I'm going to guess that you were in a loop, re-using the same byte[] buffer. The 1st message comes in but is not displayed on the screen/log/std out yet (because you are in the loop); then you read the 2nd message, replacing the 1st message's data in the buffer. Again, I'm going to guess that you didn't store how much you read, and didn't offset your store buffer by the previous read amount.
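A minimal sketch of that buffer-clobbering problem and its fix, using Python's io.BytesIO as a stand-in for the serial stream (the message contents are made up):

```python
import io

# io.BytesIO stands in for the serial stream; the payload is made up.
stream = io.BytesIO(b"first-part|second-part")

# Buggy pattern: each read() result replaces the previous one, so when the
# data is finally displayed, only the tail end of the last read is left.
chunk = stream.read(11)      # b"first-part|"
chunk = stream.read(64)      # b"second-part" -- the first part is gone

# Correct pattern: accumulate into one buffer, tracking how much was read.
stream.seek(0)
received = bytearray()
while True:
    data = stream.read(8)    # small reads, as a serial port would deliver
    if not data:
        break
    received += data         # append after what was read so far
```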

Q:"I eventually changed my code so that when I get an event I'd called if(inputStream.available() > 0) while((aByte = read()) > -1) store the byte."

A: Bravo... that's the good stuff there. Now your data read is inside an IF statement, so your 2nd message will not clobber your 1st... well, actually, it was probably just one big(ger) message in the 1st place. But now you will read it all in one shot, keeping the data intact.

A: Ahhh, the good ol' catch-all scapegoat! The race condition... :-) Yes, this may have been a race condition; in fact it may well have been. But it could also just be the way RXTX clears the flag. The clearing of the 'data available' flag may not happen as quickly as one expects. For example, does anyone know the difference between read vs. readLine in relation to clearing the buffer the data was previously stored in and resetting the event flag? Neither do I. :-) Nor can I find the answer yet... but... let me ramble on for a few sentences more. Event-driven programming still has some flaws. Let me give you a real-world example I had to deal with recently.

  • The TCP/IP stack, however, goes to notify me, sees that the flag is still SET, and will not notify me again.
  • ... and the last 10 bytes remain in the TCP/IP queue... because I was not notified of them.

See, the notification was missed because the flag was still set... even though I had begun reading the bytes. Had I finished the bytes, then the flag would have been cleared, and I would have received notification for the next 10 bytes.

The exact opposite of what is happening for you now.

So yea, go with an IF available()... do a read of the returned length of data. Then, if you are paranoid, set a timer and call available() again; if there is still data there, then do a read on the new data. If available() returns 0 (or -1), then relax... sit back... and wait for the next OnEvent notification.

Wrong. You're violating the InputStream contract. By definition, reaching the end of the stream means there is no more data to come. You can't return more data once you've reached the end. That's just plain English.

Java InputStream blocking read - Stack Overflow

java blocking inputstream rxtx java-io

Raw mode is basically there to allow you to bypass some of the way that your computer handles TCP/IP. Rather than going through the normal layers of encapsulation/decapsulation that the TCP/IP stack in the kernel does, you just pass the packet to the application that needs it. No TCP/IP processing -- so it's not a processed packet, it's a raw packet. The application that's using the packet is now responsible for stripping off the headers, analyzing the packet -- all the stuff that the TCP/IP stack in the kernel normally does for you.
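To make "the application strips off the headers itself" concrete, here is a hedged Python sketch that parses a hand-crafted IPv4 header rather than sniffing live traffic (opening a real SOCK_RAW socket would require root privileges):

```python
import struct

# With SOCK_RAW the kernel hands the application the raw bytes, so the
# application must parse the IP header itself. We parse a fabricated
# 20-byte IPv4 header (no options) instead of captured traffic.
def parse_ipv4_header(packet: bytes):
    (version_ihl, tos, total_len, ident, flags_frag,
     ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", packet[:20])
    return {
        "version": version_ihl >> 4,
        "ihl": version_ihl & 0x0F,         # header length in 32-bit words
        "total_length": total_len,
        "ttl": ttl,
        "protocol": proto,                 # 6 = TCP, 17 = UDP
        "src": ".".join(map(str, src)),
        "dst": ".".join(map(str, dst)),
    }

# A fabricated IPv4/TCP header: version 4, IHL 5, TTL 64, proto 6,
# 10.0.0.1 -> 10.0.0.2 (checksum left as 0 for the sketch).
hdr = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 40, 1, 0, 64, 6, 0,
                  bytes([10, 0, 0, 1]), bytes([10, 0, 0, 2]))
info = parse_ipv4_header(hdr)
```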


unix - 'SOCK_RAW' option in 'socket' system call - Stack Overflow

sockets unix networking

It should also be noted that a TCP/IP connection is effectively a pipe in the sense that you shove bytes in one end and they come out the other. TCP/IP connections are available on pretty much any platform you are likely to care about.

This is worth noting because it's usually the correct, non-Windows-specific way to do this.
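A minimal Python sketch of the "pipe" view: bytes shoved in one end of a loopback TCP connection come out the other (the addresses and payload are arbitrary):

```python
import socket

# A loopback TCP connection behaves like a pipe: bytes written at one end
# come out the other, on essentially any platform.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))    # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

writer = socket.create_connection(("127.0.0.1", port))
reader, _addr = server.accept()

writer.sendall(b"shoved in one end")
out = reader.recv(1024)          # ... comes out the other

for s in (writer, reader, server):
    s.close()
```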

c++ - Can pipes be used across LAN computers? - Stack Overflow

c++ c ipc pipe

Do not forget security

When submitting a form, you're telling your browser to send, via the HTTP protocol, a message on the network, properly enveloped in a TCP/IP protocol message structure. When sending data, you can use the POST or GET method of the HTTP protocol. POST tells your browser to build an HTTP message and put all the content in the body of the message (a very useful way of doing things, safer and also more flexible). GET has some constraints on data representation and length.
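A hedged sketch of where the data ends up (the field names and the /submit path are made up): GET puts it in the request line's query string, while POST puts it in the message body.

```python
from urllib.parse import urlencode

# Hypothetical form fields, just for illustration.
fields = {"username": "alice", "comment": "hello"}
encoded = urlencode(fields)

# GET: the data travels in the URL's query string.
get_request = (
    f"GET /submit?{encoded} HTTP/1.1\r\n"
    "Host: example.com\r\n"
    "\r\n"
)

# POST: the data travels in the message body, after the blank line.
post_request = (
    "POST /submit HTTP/1.1\r\n"
    "Host: example.com\r\n"
    "Content-Type: application/x-www-form-urlencoded\r\n"
    f"Content-Length: {len(encoded)}\r\n"
    "\r\n"
    f"{encoded}"
)
```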

When sending a file, it is necessary to tell the HTTP protocol that you are sending a file that has several characteristics and pieces of information inside it. This way it is possible to consistently send data to the receiver and let it open the file with the correct format, and so on... This is a requirement of the HTTP protocol, as shown here

You cannot send files using default enctype parameters because your receiver might encounter problems reading it (consider that a file is a descriptor for some data for a specific operating system, if you see things this way, maybe you'll understand why it is so important to specify a different enctype for files).
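For illustration, a hand-built multipart/form-data body might look like this (the boundary, field names, and file bytes are all made up); each part carries its own headers, which is how the receiver knows which bytes are a file and what its content type is:

```python
# A hand-built multipart/form-data body. Real browsers and HTTP libraries
# generate this for you; the boundary and contents here are fabricated.
boundary = "----DemoBoundary1234"
file_bytes = b"\x89PNG...fake image bytes..."

body = (
    f"--{boundary}\r\n"
    'Content-Disposition: form-data; name="caption"\r\n'
    "\r\n"
    "holiday photo\r\n"
    f"--{boundary}\r\n"
    'Content-Disposition: form-data; name="upload"; filename="photo.png"\r\n'
    "Content-Type: image/png\r\n"     # per-part header: how to open the file
    "\r\n"
).encode() + file_bytes + f"\r\n--{boundary}--\r\n".encode()

# The request's own Content-Type announces the boundary to the receiver:
content_type = f"multipart/form-data; boundary={boundary}"
```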

This way of doing things also ensures that some security algorithms work on your messages. This information is also used by application-level routers in order to act as good firewalls for external data.

Well, as you can see, using a specific enctype for files is not a stupid thing.

header of the message or body of the message?

The information about the enctype is part of the header. If you send a file, the body of the http message is the bytestream of the file.

I agree with @manikanta, I'm pretty sure a POST sends the data in the body of the request

html - What does enctype='multipart/form-data' mean? - Stack Overflow

html http-headers

CR_SERVER_GONE_ERROR

the client couldn't send a question to the server

In your specific case, while importing the database file via mysql, this most likely means that some of the queries in the SQL file are too large to import and couldn't be executed on the server; therefore the client fails on the first error encountered.

So you have the following possibilities:

  • Add the force option (-f) for mysql to proceed and execute the rest of the queries (this can also be configured in ~/.my.cnf).
  • Dump the database using the --skip-extended-insert option to break down the large queries, then import it again.
  • Increase the --max-allowed-packet value.

In general this error could mean several things, such as:

  • A query to the server is incorrect or too large. Solution: increase the max_allowed_packet variable (e.g. to 1G). Make sure the variable is under the [mysqld] section, not [mysql]. Double-check that the value was set properly by: mysql -sve "SELECT @@max_allowed_packet" # or: mysql -sve "SHOW VARIABLES LIKE 'max_allowed_packet'"
  • You got a timeout from the TCP/IP connection on the client side. Solution: increase the wait_timeout variable.
  • You tried to run a query after the connection to the server was closed. Solution: a logic error in the application should be corrected.
  • Host name lookups failed (e.g. a DNS server issue), or the server was started with the --skip-networking option. Another possibility is that your firewall blocks the MySQL port (e.g. 3306 by default).
  • You have encountered a bug where the server died while executing the query.
  • A client running on a different host does not have the necessary privileges to connect.
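The max_allowed_packet and wait_timeout fixes above would look roughly like this in the server configuration (the file location varies by distribution, and the values shown are examples, not recommendations):

```ini
# e.g. /etc/mysql/my.cnf -- note the [mysqld] section, not [mysql]
[mysqld]
max_allowed_packet = 1G
wait_timeout       = 28800
```

Restart the server afterwards, or set it at runtime with SET GLOBAL max_allowed_packet = 1073741824; (new connections pick up the global value).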

Here are a few expert-level debug ideas:

  • Check the logs, e.g. sudo tail -f $(mysql -Nse "SELECT @@GLOBAL.log_error")
  • Test your connection via the mysql client, telnet, or a mysql_ping call.
  • Use tcpdump to sniff the MySQL communication (won't work for socket connections), e.g.: sudo tcpdump -i lo0 -s 1500 -nl -w- port mysql | strings
  • Trace the server process with strace (Linux) or dtrace/dtruss (macOS), e.g. sudo dtruss -a -fn mysqld 2>&1

Source code responsible for throwing the CR_SERVER_GONE_ERROR error (mysqlx.cc):

void Connection::throw_mysqlx_error(const boost::system::error_code &error)
{
  if (!error)
    return;

  switch (error.value())
  {
    // OSX return this undocumented error in case of kernel race-conndition
    // lets ignore it and next call to any io function should return correct
    // error
    case boost::system::errc::wrong_protocol_type:
    return;
    case boost::asio::error::eof:
    case boost::asio::error::connection_reset:
    case boost::asio::error::connection_aborted:
      throw Error(CR_SERVER_GONE_ERROR, "MySQL server has gone away");

    case boost::asio::error::broken_pipe:
      throw Error(CR_BROKEN_PIPE, "MySQL server has gone away");

    default:
      throw Error(CR_UNKNOWN_ERROR, error.message());
  }
}

mysqli_ping does not work for mysqlnd; see the docs.

ERROR 2006 (HY000): MySQL server has gone away - Stack Overflow

mysql

You have no greater security provided because the variables are sent over HTTP POST than you have with variables sent over HTTP GET.

Let's suppose you have the following HTML document using GET:

<html>
<body>
<form action="http://example.com" method="get">
    User: <input type="text" name="username" /><br/>
    Password: <input type="password" name="password" /><br/>
    <input type="hidden" name="extra" value="lolcatz" />
    <input type="submit"/>
</form>
</body>
</html>
GET /?username=swordfish&password=hunter2&extra=lolcatz HTTP/1.1
 Host: example.com
 Connection: keep-alive
 Accept: application/xml,application/xhtml+xml,text/html;q=0.9,text/ [...truncated]
 User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US) [...truncated]
 Accept-Encoding: gzip,deflate,sdch
 Accept-Language: en-US,en;q=0.8
 Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3
POST / HTTP/1.1
 Host: example.com
 Connection: keep-alive
 Content-Length: 49
 Cache-Control: max-age=0
 Origin: null
 Content-Type: application/x-www-form-urlencoded
 Accept: application/xml,application/xhtml+xml,text/ [...truncated]
 User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.0; [...truncated]
 Accept-Encoding: gzip,deflate,sdch
 Accept-Language: en-US,en;q=0.8
 Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3

 username=swordfish&password=hunter2&extra=lolcatz
  • Included in both examples

Many browsers do not support HTTP methods other than POST/GET.

Many browser behaviors store the page address, but this doesn't mean you can ignore any of these other issues.

Is one inherently more secure than the other? I realize that POST doesn't expose information in the URL, but is there any real value in that or is it just security through obscurity? What is the best practice here?

This is correct. The fact that the software you're using to speak HTTP tends to store the request variables with one method but not the other only prevents someone from looking at your browser history or some other naive attack from a 10-year-old who thinks they understand h4x0r1ng, or from scripts that check your history store. If you have a script that can check your history store, you could just as easily have one that checks your network traffic, so this entire security through obscurity is only providing obscurity to script kiddies and jealous girlfriends.

Over https, POST data is encoded, but could urls be sniffed by a 3rd party?

Here's how SSL works. Remember those two requests I sent above? Here's what they look like in SSL: (I changed the page to https://encrypted.google.com/ as example.com doesn't respond on SSL).

q5XQP%RWCd2u#o/T9oiOyR2_YO?yo/3#tR_G7 2_RO8w?FoaObi)
oXpB_y?oO4q?`2o?O4G5D12Aovo?C@?/P/oOEQC5v?vai /%0Odo
QVw#6eoGXBF_o?/u0_F!_1a0A?Q b%TFyS@Or1SR/O/o/_@5o&_o
9q1/?q$7yOAXOD5sc$H`BECo1w/`4?)f!%geOOF/!/#Of_f&AEI#
yvv/wu_b5?/o d9O?VOVOFHwRO/pO/OSv_/8/9o6b0FGOH61O?ti
/i7b?!_o8u%RS/Doai%/Be/d4$0sv_%YD2_/EOAO/C?vv/%X!T?R
_o_2yoBP)orw7H_yQsXOhoVUo49itare#cA?/c)I7R?YCsg ??c'
(_!(0u)o4eIis/S8Oo8_BDueC?1uUO%ooOI_o8WaoO/ x?B?oO@&
Pw?os9Od!c?/$3bWWeIrd_?( `P_C?7_g5O(ob(go?&/ooRxR'u/
T/yO3dS&??hIOB/?/OI?$oH2_?c_?OsD//0/_s%r
rV/O8ow1pc`?058/8OS_Qy/$7oSsU'qoo#vCbOO`vt?yFo_?EYif)
43`I/WOP_8oH0%3OqP_h/cBO&24?'?o_4`scooPSOVWYSV?H?pV!i
?78cU!_b5h'/b2coWD?/43Tu?153pI/9?R8!_Od"(//O_a#t8x?__
bb3D?05Dh/PrS6_/&5p@V f $)/xvxfgO'q@y&e&S0rB3D/Y_/fO?
_'woRbOV?_!yxSOdwo1G1?8d_p?4fo81VS3sAOvO/Db/br)f4fOxt
_Qs3EO/?2O/TOo_8p82FOt/hO?X_P3o"OVQO_?Ww_dr"'DxHwo//P
oEfGtt/_o)5RgoGqui&AXEq/oXv&//?%/6_?/x_OTgOEE%v (u(?/
t7DX1O8oD?fVObiooi'8)so?o??`o"FyVOByY_ Supo? /'i?Oi"4
tr'9/o_7too7q?c2Pv

(note: I converted the HEX to ASCII, some of it should obviously not be displayable)

The entire HTTP conversation is encrypted, the only visible portion of communication is on the TCP/IP layer (meaning the IP address and connection port information).

The only thing that POST is a security measure towards? Protection against your jealous ex flipping through your browser history. That's it. The rest of the world is logged into your account laughing at you.

To further demonstrate why POST isn't secure, Facebook uses POST requests all over the place, so how can software such as FireSheep exist?

Note that you may be attacked with CSRF even if you use HTTPS and your site does not contain XSS vulnerabilities. In short, this attack scenario assumes that the victim (the user of your site or service) is already logged in and has a proper cookie and then the victim's browser is requested to do something with your (supposedly secure) site. If you do not have protection against CSRF the attacker can still execute actions with the victims credentials. The attacker cannot see the server response because it will be transferred to the victim's browser but the damage is usually already done at that point.

A shame you didn't talk about CSRF :-). Is there any way to contact you?

"[...] so this entire security through obscurity is only providing obscurity to script kiddies and jealous girlfriends.[...]" . this entirely depends on the skills of the jealous gf. moreover, no gf/bf should be allowed to visit your browser history. ever. lol.

html - Is either GET or POST more secure than the other? - Stack Overf...

html security http

There's no such thing as "safe" or "unsafe" values as such. There are only values that the server controls and values that the user controls and you need to be aware of where a value comes from and hence whether it can be trusted for a certain purpose. $_SERVER['HTTP_FOOBAR'] for example is entirely safe to store in a database, but I most certainly wouldn't eval it.

These variables are set by the server environment and depend entirely on the server configuration.

'GATEWAY_INTERFACE'
'SERVER_ADDR'
'SERVER_SOFTWARE'
'DOCUMENT_ROOT'
'SERVER_ADMIN'
'SERVER_SIGNATURE'

These variables depend on the specific request the client sent, but can only take a limited number of valid values, since all invalid values should be rejected by the web server and not cause the invocation of the script to begin with. Hence they can be considered reliable.

'HTTPS'
'REQUEST_TIME'
'REMOTE_ADDR'
'REMOTE_HOST'
'REMOTE_PORT'
'SERVER_PROTOCOL'
'HTTP_HOST'
'SERVER_NAME'
'SCRIPT_FILENAME'
'SERVER_PORT'
'SCRIPT_NAME'

* The REMOTE_ values are guaranteed to be the valid address of the client, as verified by a TCP/IP handshake. This is the address where any response will be sent. REMOTE_HOST relies on reverse DNS lookups, though, and may hence be spoofed by DNS attacks against your server (in which case you have bigger problems anyway). The address may belong to a proxy; that is a simple reality of the TCP/IP protocol and nothing you can do anything about.


These values are not checked at all and do not depend on any server configuration, they are entirely arbitrary information sent by the client.

'argv', 'argc' (only applicable to CLI invocation, not usually a concern for web servers)
'REQUEST_METHOD'
'QUERY_STRING'
'HTTP_ACCEPT'
'HTTP_ACCEPT_CHARSET'
'HTTP_ACCEPT_ENCODING'
'HTTP_ACCEPT_LANGUAGE'
'HTTP_CONNECTION'
'HTTP_REFERER'
'HTTP_USER_AGENT'
'AUTH_TYPE'
'PHP_AUTH_DIGEST'
'PHP_AUTH_USER'
'PHP_AUTH_PW'
'PATH_INFO'
'ORIG_PATH_INFO'
'REQUEST_URI'
'PHP_SELF'
'PATH_TRANSLATED'
'HTTP_'

REQUEST_METHOD may be considered reliable as long as the web server allows only certain request methods.

The authentication values (AUTH_TYPE, PHP_AUTH_USER, PHP_AUTH_PW, PHP_AUTH_DIGEST) may be considered reliable if authentication is handled entirely by the web server.

The superglobal $_SERVER also includes several environment variables. Whether these are "safe" or not depends on how (and where) they are defined. They can range from completely server controlled to completely user controlled.
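Since the HTTP_ values listed earlier are entirely arbitrary data sent by the client, the practical rule is to escape them before echoing them into any HTML output. A minimal sketch in Python (the helper name is made up; the same idea applies to PHP's htmlspecialchars()):

```python
import html

def safe_for_html(raw_header_value):
    # Client-supplied headers (User-Agent, Referer, ...) may contain
    # markup; escape <, >, &, and quotes before rendering them in a page.
    return html.escape(raw_header_value, quote=True)

hostile_user_agent = "<script>alert(1)</script>"
print(safe_for_html(hostile_user_agent))
# &lt;script&gt;alert(1)&lt;/script&gt;
```

This is exactly the "safe for what purpose" point from the answer: the raw value is fine to store, but unsafe to emit into HTML unescaped.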

@Rook But as I said, it absolutely depends on how you use it. Values just by themselves are neither safe nor unsafe, it depends on what you use them for. Even data sent from a nefarious user is perfectly safe as long as you're not doing anything with it that may compromise your security.

@Rook: your idea of "safe" makes this question seem a bit arbitrary, especially since it's entirely tied to an obscure extension or custom version of PHP. While you say "should not have a "shoot from the hip" approach", any answer actually seems to require at minimum familiarity with PHP sourcecode to find out how these values are set. Would emailing PHP devs not be a better approach to finding an answer?

@Rook: Miscommunication. As deceze hinted at, "safe for what purpose". As I hinted at, your purpose is unknown, and besides there are several other undocumented $_SERVER values depending on how the file is served. In my opinion, the documented ones don't clarify the true source. Otherwise I believe you wouldn't be asking this question. Glad you got a list you can use. But I'd still suggest submitting a bug report (when their bug site is fixed), sending doc maintainers an email, or updating the docs yourself (if you're privy to the link). It would benefit the community to know this info.

SERVER_NAME is not necessarily controlled by the server. Depending on gateway and settings it may be duplicated from HTTP_HOST and hence subject to the same caveat.


php - Which $_SERVER variables are safe? - Stack Overflow

php security

The benefit of Wireshark is that it can show you errors in the layers below the HTTP protocol. Fiddler will show you errors in the HTTP protocol itself.

If you think the problem is somewhere in the HTTP request issued by the browser, or you are just looking for more information in regards to what the server is responding with, or how long it is taking to respond, Fiddler should do.

If you suspect something may be wrong in the TCP/IP protocol used by your browser and the server (or in other layers below that), go with WireShark.

Indeed, Wireshark can uncover proxy and NAT server issues, and it can be used both on the client you are connecting from and on the server.

debugging - Wireshark vs Firebug vs Fiddler - pros and cons? - Stack O...

debugging web-applications firebug fiddler wireshark

Only silence the password request

Questions about login without password keep popping up. Keep reading, the best options come last. But let's clarify a couple of things first.

If your issue is only the password prompt, you can silence it. I quote the manual here:

-w
--no-password

Never issue a password prompt. If the server requires password authentication and a password is not available by other means such as a .pgpass file, the connection attempt will fail. This option can be useful in batch jobs and scripts where no user is present to enter a password. (...)

Normally this is unnecessary. The default database superuser postgres usually corresponds to the system user of the same name. Running psql from this account doesn't require a password if the authentication method peer or ident is set in your pg_hba.conf file. You probably have a line like this:

local    all    postgres    peer
local    all    all         peer

This means every local user can log in to all databases as the database user of the same name, without a password. However, there is a common misconception here. Quoting again:

Bold emphasis mine. You are connecting to localhost, which is not a "local connection", even though it has the word "local" in it. It's a TCP/IP connection to 127.0.0.1. Wikipedia on localhost:

On modern computer systems, localhost as a hostname translates to an IPv4 address in the 127.0.0.0/8 (loopback) net block, usually 127.0.0.1, or ::1 in IPv6.

Omit the parameter -h from the psql invocation. Quoting the manual on psql once more:

If you omit the host name, psql will connect via a Unix-domain socket to a server on the local host, or via TCP/IP to localhost on machines that don't have Unix-domain sockets.

Since Windows doesn't have Unix-domain sockets, pg_hba.conf lines starting with local are not applicable on Windows. On Windows you connect via localhost by default, which brings us back to the start.

If your security requirements are lax, you could just trust all connections via localhost:

I would only do that for debugging with remote connections off. For some more security you can use SSPI authentication on Windows. Add this line to pg_hba.conf for "local" connections:

host    all    all    127.0.0.1/32     sspi

You could set an environment variable, but this is discouraged, especially for Windows. The manual:

PGPASSWORD behaves the same as the password connection parameter. Use of this environment variable is not recommended for security reasons, as some operating systems allow non-root users to see process environment variables via ps; instead consider using the ~/.pgpass file (see Section 32.15).

You can also supply the password in a conninfo string passed to psql:

$ psql "user=myuser password=secret_pw host=localhost port=5432 sslmode=require"

Or a URI, which is used instead of a database name:

$ psql postgresql://myuser:secret_pw@localhost:5432/mydb?sslmode=require

But it's usually preferable to set up a .pgpass file rather than putting passwords into script files. Read the short chapter in the manual carefully. In particular, note that here ...

A host name of localhost matches both TCP (host name localhost) and Unix domain socket (pghost empty or the default socket directory) connections coming from the local machine.

The exact path depends on the system. This file can hold passwords for multiple combinations of role and port (DB cluster):

localhost:5432:*:myadmin:myadminPasswd
localhost:5434:*:myadmin:myadminPasswd
localhost:5437:*:myadmin:myadminPasswd
...
C:\Documents and Settings\My_Windows_User_Name\Application Data\postgresql
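As a sketch of setting this up from a script, the following Python writes a password file with the 0600 permissions libpq requires on Unix and points PGPASSFILE at it. The entry contents are made-up examples, and a temporary file stands in for ~/.pgpass:

```python
import os
import stat
import tempfile

# Example entry: host:port:database:user:password (credentials made up).
entry = "localhost:5432:*:myadmin:myadminPasswd\n"

# Stand-in for ~/.pgpass; mkstemp creates the file privately.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "w") as f:
    f.write(entry)

# On Unix, libpq ignores the password file unless it is private to the
# owner, i.e. mode 0600.
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)

# Point libpq at the file so psql invoked from this environment uses it.
os.environ["PGPASSFILE"] = path
```

On Windows the file lives at the %APPDATA%\postgresql path shown above and the permission check does not apply, but the line format is the same.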

postgresql - Run batch file with psql command without password - Stack...

postgresql shell connection psql

I don't think you want a custom binding so much as you want to customize the out-of-the-box bindings, unless your intention was to create a proprietary communication protocol outside of TCP/IP etc.

For the security issue, you would want to look into setting the Security.Mode property as well as assigning the right transport and/or message security properties, e.g. certificate or password challenge, encrypt, encrypt and sign, etc.

You'll also need to do the same on the client side. The binding should be almost identical to that of the server side.

If you don't like basicHttp, there's always TCP, MSMQ, named pipes and so on. You should look it up to get the full list.

c# - How to connect to a WCF service with Custom Binding from unmanage...

c# c++ wcf c++-cli

Any modern single server is able to serve thousands of clients at once. Its HTTP server software just has to be event-driven (IOCP) oriented (we are not in the old Apache one connection = one thread/process equation any more). Even the HTTP server built into Windows (http.sys) is IOCP-oriented and very efficient (running in kernel mode). From this point of view, there won't be a lot of difference in scaling between WebSockets and regular HTTP connections. One TCP/IP connection uses few resources (much less than a thread), and modern OSes are optimized for handling a lot of concurrent connections: WebSockets and HTTP are just OSI layer-7 application protocols, inheriting from the TCP/IP specifications.
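The event-driven model described above (one loop multiplexing many connections, rather than one thread or process per connection) can be sketched with Python's asyncio. This is an illustration of the concurrency model, not of http.sys or any production server:

```python
import asyncio

async def handle(reader, writer):
    # One lightweight coroutine per connection; thousands of these can
    # share a single thread, woken only when their socket has data.
    data = await reader.readline()
    writer.write(data)  # echo the line back
    await writer.drain()
    writer.close()
    await writer.wait_closed()

async def main():
    # Port 0 asks the OS for any free port.
    server = await asyncio.start_server(handle, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]

    # Loopback client to exercise the server in the same event loop.
    reader, writer = await asyncio.open_connection("127.0.0.1", port)
    writer.write(b"ping\n")
    await writer.drain()
    reply = await reader.readline()
    writer.close()
    await writer.wait_closed()

    server.close()
    await server.wait_closed()
    return reply

print(asyncio.run(main()))  # b'ping\n'
```

The same loop could hold open thousands of idle WebSocket connections cheaply, which is why the per-connection cost argument applies to both protocols.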

But, from experiment, I've seen two main problems with WebSockets:

  • They do not support CDN;
  • They have potential security issues.

Therefore, I would recommend:

  • Use WebSockets for client notifications only (with a fallback mechanism to long-polling - there are plenty of libraries around);
  • Use RESTful / JSON for all other data, using a CDN or proxies for cache.

In practice, full WebSockets applications do not scale well. Just use WebSockets for what they were designed to: push notifications from the server to the client.

About the potential problems of using WebSockets:

Of course, you won't put all your data in your CDN cache, but in practice, a lot of common content won't change often. I suspect that 80% of your REST resources may be cached... Even a one-minute (or 30-second) CDN expiration timeout may be enough to give your central server a new life, and enhance the application responsiveness a lot, since CDNs can be geographically tuned...

To my knowledge, there is no WebSockets support in CDNs yet, and I suspect there never will be. WebSockets are stateful, whereas HTTP is stateless, so HTTP is much more easily cached. In fact, to make WebSockets CDN-friendly, you may need to switch to a stateless RESTful approach... which would not be WebSockets any more.

WebSockets have potential security issues, especially regarding DoS attacks. For illustration of the new security vulnerabilities, see this set of slides and this webkit ticket.

WebSockets avoid any chance of packet inspection at the OSI layer-7 application level, which has become pretty standard nowadays in business security. In effect, WebSockets obfuscate the transmission, so they may be a major security leak.

@ArnaudBouchez - +1 for the nice exposition on CDN. Quick follow up question - what do you think of the feasibility of Event delivery networks? Patterned after CDNs but geared toward delivering streaming data etc over websockets or some other as yet unseen technology.

html5 - Do HTML WebSockets maintain an open connection for each client...

html5 websocket

Normally there won't be several "first" requests. The browser needs to get the page it's displaying first. Then, as it parses that page, it will request the resources referenced by it (images, style sheets, JavaScript, etc.), usually as it encounters them (although there are various ways to modify that), in parallel (up to some browser-specific limit), and often on the same TCP/IP connection (depending on the browser and server). So the first page request should set the session ID, and subsequent requests will carry the session cookie.

java - How does servlet container instantiate session for several "fir...

java session servlets

In this example you're using a File System Context as the JNDI provider. The JMS objects are being stored in a flat file format in the c:/jndi directory. As you've done you can look at this file in a text editor, it's not that easy to read but you'll be able to see some elements of the object. As an aside I would recommend using the WMQ Explorer as the admin tool of choice here - that can read and update any JNDI including File System Context.

The last line is doing a lookup of an object with the name "UM_QMGR_QCF". This is only doing a lookup of the object. It wouldn't connect to the QueueManager to do this, and creation of a connection factory object won't create a connection back to the QueueManager.

The error that is being seen would come from the createConnection call. The error implies that the userid/password supplied on the createConnection call doesn't match, or is not authenticated by, whatever security is set up on the QM.

That error isn't connected with the SSL setup on the TCP/IP link.

I'd suggest validating where the exception is coming from - also try just doing a System.out.println() on the object that comes back from JNDI. All WMQ admin objects will format themselves via a built-in toString().

ibm mq - WMQ JNDI look up with authentication - Stack Overflow

ibm-mq

SLEEP 5 was included in some of the Windows Resource Kits.

TIMEOUT 5 was included in some of the Windows Resource Kits, but is now a standard command in Windows 7 and 8 (not sure about Vista).

PING 1.1.1.1 -n 1 -w 5000 >NUL

For any MS-DOS or Windows version with a TCP/IP client, PING can be used to delay execution for a number of seconds.

NETSH badcommand
CHOICE

How to sleep for 5 seconds in Windows's Command Prompt? (or DOS) - Sta...

windows command-line batch-file sleep