A compilation of articles: how to search google.com effectively, mastering cURL in PHP (with a few words about other useful cURL options), and a collection of tips and facts on optimizing PHP scripts.

How to search using google.com

Everyone probably knows how to use a search engine like Google =) But not everyone knows that if you compose a search query correctly using special constructs, you can find what you are looking for much more efficiently and quickly =) In this article I will try to show what you need to do to search correctly.

Google supports several advanced search operators that have special meaning when searching on google.com. Typically, these operators modify the search, or even tell Google to perform a completely different type of search. For example, link: is a special operator, and the query link:www.google.com will not perform a normal search but will instead find all web pages that link to google.com.
Alternative query types

cache: Shows the version of a web page stored in Google's cache. If you include other words in the query, Google will highlight those words within the cached document.
For example, cache:www.website web will show the cached content with the word "web" highlighted.

link: Shows web pages that contain links to the specified page.
For example: link:www.website will display all pages that link to http://www.website

related: Displays web pages that are "related" to the specified web page.
For example, related:www.google.com will list web pages that are similar to the Google home page.

info: The query info: will present some of the information that Google has about the requested web page.
For example, info:website will show information about our forum =) (Armada - Forum of adult webmasters).

Other information requests

define: The define: query will provide a definition of the words you type after this, compiled from various online sources. The definition will be for the entire phrase entered (that is, it will include all words in the exact query).

stocks: If you start a query with stocks:, Google will treat the rest of the query terms as stock tickers and link to a page showing prepared information for those tickers.
For example, stocks: intel yahoo will show information about Intel and Yahoo. (Note that you must type the ticker symbols, not the company name.)

Request Modifiers

site: If you include site: in your query, Google will limit the results to the websites it finds in that domain.
You can also search individual zones, such as ru, org, com, etc. (site:com or site:ru)

allintitle: If you run a query with allintitle:, Google will limit the results to pages that have all the query words in the title.
For example, allintitle: google search will return all Google search pages, such as Images, Blog, etc.

intitle: If you include intitle: in your query, Google will restrict the results to documents containing that word in the title.
For example, intitle:Business

allinurl: If you run a query with allinurl:, Google will limit the results to pages that have all the query words in the URL.
For example, allinurl: google search will return documents with google and search in the URL. As an option, you can also separate words with a slash (/); then the words on both sides of the slash will be searched within the same page. Example: allinurl: foo/bar

inurl: If you include inurl: in your query, Google will limit the results to documents containing that word in the URL.
For example, Animation inurl:website

intext: searches only the text of the page for the specified word, ignoring the title, link texts, and everything else not related to the page text. There is also a derivative of this modifier, allintext:, i.e. all further words in the query will be searched only in the page text, which is also important, ignoring words frequently used in links.
For example, intext:forum

daterange: searches within a time range (daterange:2452389-2452389); the dates are specified as Julian day numbers.
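Few people remember Julian day numbers offhand, so here is a minimal PHP sketch of converting ordinary dates into a daterange: query (assuming PHP's calendar extension is available; the search terms are just an illustration):

// convert Gregorian dates to Julian day numbers for a daterange: query
$start = gregoriantojd(4, 1, 2002);   // April 1, 2002
$end   = gregoriantojd(4, 30, 2002);  // April 30, 2002
echo "php tutorial daterange:$start-$end";
// prints: php tutorial daterange:2452366-2452395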

And finally, all sorts of interesting example queries

Examples of composing Google queries. For spammers

inurl:control.guest?a=sign

Site:books.dreambook.com “Homepage URL” “Sign my” inurl:sign

Site:www.freegb.net Homepage

Inurl:sign.asp "Character Count"

"Message:" inurl:sign.cfm "Sender:"

inurl:register.php “User Registration” “Website”

Inurl:edu/guestbook “Sign the Guestbook”

Inurl:post "Post Comment" "URL"

Inurl:/archives/ “Comments:” “Remember info?”

“Script and Guestbook Created by:” “URL:” “Comments:”

inurl:?action=add “phpBook” “URL”

Intitle:"Submit New Story"

Magazines

inurl:www.livejournal.com/users/mode=reply

inurl:greatestjournal.com/mode=reply

Inurl:fastbb.ru/re.pl?

inurl:fastbb.ru/re.pl? "Guest book"

Blogs

Inurl:blogger.com/comment.g?”postID”"anonymous"

Inurl:typepad.com/ “Post a comment” “Remember personal info?”

Inurl:greatestjournal.com/community/ “Post comment” “addresses of anonymous posters”

“Post comment” “addresses of anonymous posters” -

Intitle:"Post comment"

Inurl:pirillo.com “Post comment”

Forums

Inurl:gate.html?”name=Forums” “mode=reply”

inurl:”forum/posting.php?mode=reply”

inurl:”mes.php?”

inurl:”members.html”

inurl:forum/memberlist.php?”

cURL is a special tool designed for transferring files and data using URL syntax. It supports many protocols, such as HTTP, FTP, TELNET, and more. cURL was originally designed as a command-line tool. Luckily for us, the cURL library is also supported by PHP. In this article we will look at some of the advanced features of cURL and touch on the practical application of this knowledge using PHP.

Why cURL?

In fact, there are many alternative ways of fetching the content of a web page. In many cases, mostly out of laziness, I have used simple PHP functions instead of cURL:

$content = file_get_contents("http://www.nettuts.com");
// or
$lines = file("http://www.nettuts.com");
// or
readfile("http://www.nettuts.com");

However, these functions have virtually no flexibility and contain a huge number of shortcomings in terms of error handling and so on. In addition, there are certain tasks that you simply cannot solve with these standard functions: interacting with cookies, authentication, submitting a form, uploading files, and so on.
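For comparison, here is roughly all the error handling the simple functions allow (a sketch; note that the @ is needed just to silence the warning, which is itself part of the problem):

// file_get_contents() only tells us that something failed, not what or why
$content = @file_get_contents("http://www.nettuts.com");
if ($content === false) {
    die("Request failed: no error message and no HTTP status code available");
}

cURL, by contrast, can report the exact error (curl_error()) and the HTTP status code (curl_getinfo()), as we will see below.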

cURL is a powerful library that supports many different protocols, options, and provides detailed information about URL requests.

Basic structure

  • Initialization
  • Assigning parameters
  • Execution and fetching the result
  • Freeing up memory

// 1. initialization
$ch = curl_init();
// 2. specify options, including the url
curl_setopt($ch, CURLOPT_URL, "http://www.nettuts.com");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_HEADER, 0);
// 3. get the HTML as the result
$output = curl_exec($ch);
// 4. close the connection
curl_close($ch);

Step #2 (that is, calling curl_setopt()) will get much more attention in this article than all the other steps, because at this stage all the most interesting and useful things happen. There are a huge number of options in cURL that can be set to configure the URL request in the most thorough way. We will not go through the entire list, but will focus only on what I consider necessary and useful for this lesson. You can explore everything else yourself if this topic interests you.
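A side note: when there are many options to set, PHP's curl_setopt_array() lets you pass them all as one array, which is a bit tidier (a minimal sketch):

$ch = curl_init();
curl_setopt_array($ch, array(
    CURLOPT_URL            => "http://www.nettuts.com",
    CURLOPT_RETURNTRANSFER => 1,  // return the result instead of printing it
    CURLOPT_HEADER         => 0,  // no headers in the output
));
$output = curl_exec($ch);
curl_close($ch);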

Error Check

Additionally, you can also use conditional statements to check if the operation was successful:

// ...
$output = curl_exec($ch);
if ($output === FALSE) {
    echo "cURL Error: " . curl_error($ch);
}
// ...

Here I ask you to note a very important point: we must use "=== FALSE" for the comparison instead of "== FALSE". For those not in the know, this helps us distinguish an empty result from the boolean value FALSE, which indicates an error.
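A quick illustration of why the strict comparison matters (a sketch; an empty page body is a perfectly valid result):

$output = "";                  // the server returned an empty body - not an error
var_dump($output == FALSE);    // bool(true)  - loose comparison cries wolf
var_dump($output === FALSE);   // bool(false) - strict comparison is correct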

Receiving the information

Another additional step is to get data about the cURL request after it has been executed.

// ...
curl_exec($ch);
$info = curl_getinfo($ch);
echo "Took " . $info["total_time"] . " seconds for url " . $info["url"];
// ...

The returned array contains the following information:

  • "url"
  • "content_type"
  • http_code
  • “header_size”
  • "request_size"
  • “filetime”
  • “ssl_verify_result”
  • “redirect_count”
  • “total_time”
  • “namelookup_time”
  • “connect_time”
  • "pretransfer_time"
  • "size_upload"
  • size_download
  • “speed_download”
  • “speed_upload”
  • "download_content_length"
  • “upload_content_length”
  • "starttransfer_time"
  • "redirect_time"

Redirect detection depending on the browser

In this first example, we will write code that can detect URL redirects based on various browser settings. For example, some websites redirect the browsers of cell phones or other devices to different pages.

We're going to use the CURLOPT_HTTPHEADER option to set our outgoing HTTP headers, including the user's browser name (User-Agent) and accepted languages. In the end, we will be able to determine which sites redirect us to different URLs.

// test URLs
$urls = array(
    "http://www.cnn.com",
    "http://www.mozilla.com",
    "http://www.facebook.com"
);

// testing browsers
$browsers = array(
    "standard" => array(
        "user_agent" => "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.6) Gecko/20091201 Firefox/3.5.6 (.NET CLR 3.5.30729)",
        "language"   => "en-us,en;q=0.5"
    ),
    "iphone" => array(
        "user_agent" => "Mozilla/5.0 (iPhone; U; CPU like Mac OS X; en) AppleWebKit/420+ (KHTML, like Gecko) Version/3.0 Mobile/1A537a Safari/419.3",
        "language"   => "en"
    ),
    "french" => array(
        "user_agent" => "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; GTB6; .NET CLR 2.0.50727)",
        "language"   => "fr,fr-FR;q=0.5"
    )
);

foreach ($urls as $url) {
    echo "URL: $url\n";
    foreach ($browsers as $test_name => $browser) {
        $ch = curl_init();
        // specify the url
        curl_setopt($ch, CURLOPT_URL, $url);
        // set the browser headers
        curl_setopt($ch, CURLOPT_HTTPHEADER, array(
            "User-Agent: {$browser['user_agent']}",
            "Accept-Language: {$browser['language']}"
        ));
        // we don't need the page content
        curl_setopt($ch, CURLOPT_NOBODY, 1);
        // we need the HTTP headers
        curl_setopt($ch, CURLOPT_HEADER, 1);
        // return the result instead of printing it
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        $output = curl_exec($ch);
        curl_close($ch);
        // Was there an HTTP redirect?
        if (preg_match("!Location: (.*)!", $output, $matches)) {
            echo "$test_name: redirects to $matches[1]\n";
        } else {
            echo "$test_name: no redirection\n";
        }
    }
    echo "\n\n";
}

First, we specify a list of URLs of sites that we will check. More precisely, we need the addresses of these sites. Next, we need to define browser settings in order to test each of these URLs. After that, we will use a loop in which we will run through all the results obtained.

The trick we use in this example to set the cURL options allows us to get not the content of the page but only the HTTP headers (stored in $output). Next, using a simple regex, we can determine whether the string "Location:" was present in the received headers.

When you run this code, you should get something like this:

Making a POST request to a specific URL

When forming a GET request, the transmitted data can be passed to the URL via a “query string”. For example, when you do a Google search, the search term is placed in the address bar of the new URL:

http://www.google.com/search?q=ruseller

In order to imitate such a request, you don't need cURL at all. If laziness finally overcomes you, use the file_get_contents() function to get the result.
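For instance, a GET request with parameters can be made in a couple of lines (a sketch):

// build the query string and fetch the page in one go
$params  = array("q" => "ruseller");
$content = file_get_contents("http://www.google.com/search?" . http_build_query($params));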

But the thing is that some HTML forms send POST requests. The data of such forms is transported in the body of the HTTP request, not in the URL as in the previous case. For example, if you fill out a form on a forum and click the search button, a POST request will most likely be made:

http://codeigniter.com/forums/do_search/

We can write a PHP script that simulates this kind of URL request. First, let's create a simple file that accepts and displays POST data. Let's call it post_output.php:

print_r($_POST);

We then create a PHP script to execute the cURL request:

$url = "http://localhost/post_output.php"; $post_data = array("foo" => "bar", "query" => "Nettuts", "action" => "Submit"); $ch = curl_init(); curl_setopt($ch, CURLOPT_URL, $url); curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); // indicate that we have a POST request curl_setopt($ch, CURLOPT_POST, 1); // add variables curl_setopt($ch, CURLOPT_POSTFIELDS, $post_data); $output = curl_exec($ch); curl_close($ch); echo $output;

When you run this script, you should get a similar result:

So the POST request was sent to the post_output.php script, which in turn printed the superglobal $_POST array, whose content we received using cURL.

File upload

First, let's create a file that will receive the upload and print the result; let's call it upload_output.php:

print_r($_FILES);

And here is the script code that performs the above functionality:

$url = "http://localhost/upload_output.php"; $post_data = array ("foo" => "bar", // file to upload "upload" => "@C:/wamp/www/test.zip"); $ch = curl_init(); curl_setopt($ch, CURLOPT_URL, $url); curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); curl_setopt($ch, CURLOPT_POST, 1); curl_setopt($ch, CURLOPT_POSTFIELDS, $post_data); $output = curl_exec($ch); curl_close($ch); echo $output;

When you want to upload a file, all you have to do is pass it as a regular POST variable, prefixed with the @ symbol. When you run this script, you will get the following result:
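A caveat worth knowing: the @ prefix only works in older PHP versions (it is deprecated as of PHP 5.5, disabled by default since PHP 5.6, and removed in PHP 7). On modern PHP, use the CURLFile class instead; a sketch, replacing the $post_data definition in the script above:

// modern replacement for the @ prefix (PHP 5.5+)
$post_data = array(
    "foo"    => "bar",
    "upload" => new CURLFile("C:/wamp/www/test.zip")
);
curl_setopt($ch, CURLOPT_POSTFIELDS, $post_data);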

Multiple cURL

One of the biggest strengths of cURL is the ability to create "multiple" cURL handles. This allows you to open connections to multiple URLs simultaneously and asynchronously.

In the classic cURL request, the execution of the script is suspended until the URL request operation completes, after which the script can continue. If you intend to interact with a large number of URLs, this becomes quite time consuming, since in the classic case you can only work with one URL at a time. However, we can fix this situation by using the special multi handles.

Let's take a look at the code example I took from php.net:

// create some cURL resources
$ch1 = curl_init();
$ch2 = curl_init();
// specify URLs and other parameters
curl_setopt($ch1, CURLOPT_URL, "http://lxr.php.net/");
curl_setopt($ch1, CURLOPT_HEADER, 0);
curl_setopt($ch2, CURLOPT_URL, "http://www.php.net/");
curl_setopt($ch2, CURLOPT_HEADER, 0);
// create a multiple cURL handle
$mh = curl_multi_init();
// add the handles
curl_multi_add_handle($mh, $ch1);
curl_multi_add_handle($mh, $ch2);
$active = null;
// execute
do {
    $mrc = curl_multi_exec($mh, $active);
} while ($mrc == CURLM_CALL_MULTI_PERFORM);
while ($active && $mrc == CURLM_OK) {
    if (curl_multi_select($mh) != -1) {
        do {
            $mrc = curl_multi_exec($mh, $active);
        } while ($mrc == CURLM_CALL_MULTI_PERFORM);
    }
}
// close everything
curl_multi_remove_handle($mh, $ch1);
curl_multi_remove_handle($mh, $ch2);
curl_multi_close($mh);

The idea is that you can use multiple cURL handlers. Using a simple loop, you can keep track of which requests have not yet been completed.

In this example there are two main loops. The first do-while loop calls curl_multi_exec(). This function is non-blocking: it does as much work as it can and returns the state of the request. As long as the returned value is the constant CURLM_CALL_MULTI_PERFORM, there is still immediate work to do (for example, the HTTP headers are currently being sent), so we keep checking this return value until we get a different result.

In the next loop, we check the condition $active && $mrc == CURLM_OK. $active is the second parameter of curl_multi_exec() and it remains true as long as any of the existing connections is active. Next, we call curl_multi_select(), which "blocks" while there is at least one active connection, until a response arrives. When that happens, we return to the main loop to continue executing requests.
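A side note: on modern PHP/libcurl builds, curl_multi_exec() essentially never returns CURLM_CALL_MULTI_PERFORM anymore, so the same logic is often written more simply (a sketch):

// simplified driving loop for a multi handle
do {
    curl_multi_exec($mh, $active);       // push the transfers along
    if (curl_multi_select($mh) == -1) {
        usleep(100000);                  // avoid busy-waiting if select() fails
    }
} while ($active > 0);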

And now let's apply what we learned with an example that will be really useful for a large number of people.

Checking Links in WordPress

Imagine a blog with a huge number of posts and messages, each of which has links to external Internet resources. Some of these links might already be "dead" for various reasons. Perhaps the page has been deleted or the site is not working at all.

We're going to create a script that parses all the links, detects websites that don't load as well as 404 pages, and then provides us with a detailed report.

I will say right away that this is not an example of how to create a plugin for WordPress. It simply makes a good testing ground for us.

Let's finally get started. First we have to fetch all links from the database:

// configuration
$db_host = "localhost";
$db_user = "root";
$db_pass = "";
$db_name = "wordpress";
$excluded_domains = array("localhost", "www.mydomain.com");
$max_connections = 10;
// variable initialization
$url_list = array();
$working_urls = array();
$dead_urls = array();
$not_found_urls = array();
$active = null;
// connect to MySQL
if (!mysql_connect($db_host, $db_user, $db_pass)) {
    die("Could not connect: " . mysql_error());
}
if (!mysql_select_db($db_name)) {
    die("Could not select db: " . mysql_error());
}
// select all published posts that contain links
$q = "SELECT post_content FROM wp_posts
      WHERE post_content LIKE '%href=%'
      AND post_status = 'publish'
      AND post_type = 'post'";
$r = mysql_query($q) or die(mysql_error());
while ($d = mysql_fetch_assoc($r)) {
    // fetch the links with a regular expression
    if (preg_match_all('!href="(.*?)"!', $d["post_content"], $matches)) {
        foreach ($matches[1] as $url) {
            $tmp = parse_url($url);
            if (isset($tmp["host"]) && in_array($tmp["host"], $excluded_domains)) {
                continue;
            }
            $url_list[] = $url;
        }
    }
}
// remove duplicates
$url_list = array_values(array_unique($url_list));
if (!$url_list) {
    die("No URL to check");
}

First, we specify the configuration data for connecting to the database, then we list the domains that will not take part in the check ($excluded_domains), and we define the maximum number of simultaneous connections our script will use ($max_connections). Then we connect to the database, select the posts that contain links, and accumulate the links into an array ($url_list).

The following code is a bit complex, so I will walk through it from start to finish:

// 1. multiple handle
$mh = curl_multi_init();

// 2. add the first batch of URLs
for ($i = 0; $i < $max_connections; $i++) {
    add_url_to_multi_handle($mh, $url_list);
}

// 3. start the processing
do {
    $mrc = curl_multi_exec($mh, $active);
} while ($mrc == CURLM_CALL_MULTI_PERFORM);

// 4. main loop
while ($active && $mrc == CURLM_OK) {
    // 5. if anything happened
    if (curl_multi_select($mh) != -1) {
        // 6. do the work
        do {
            $mrc = curl_multi_exec($mh, $active);
        } while ($mrc == CURLM_CALL_MULTI_PERFORM);
        // 7. is there any info?
        if ($mhinfo = curl_multi_info_read($mh)) {
            // this means the request has finished
            // 8. extract the info
            $chinfo = curl_getinfo($mhinfo["handle"]);
            // 9. dead link?
            if (!$chinfo["http_code"]) {
                $dead_urls[] = $chinfo["url"];
            // 10. 404?
            } else if ($chinfo["http_code"] == 404) {
                $not_found_urls[] = $chinfo["url"];
            // 11. working
            } else {
                $working_urls[] = $chinfo["url"];
            }
            // 12. clean up after ourselves
            curl_multi_remove_handle($mh, $mhinfo["handle"]);
            // if the script loops forever, comment out this call
            curl_close($mhinfo["handle"]);
            // 13. add a new url and keep working
            if (add_url_to_multi_handle($mh, $url_list)) {
                do {
                    $mrc = curl_multi_exec($mh, $active);
                } while ($mrc == CURLM_CALL_MULTI_PERFORM);
            }
        }
    }
}

// 14. finish
curl_multi_close($mh);

echo "==Dead URLs==\n";
echo implode("\n", $dead_urls) . "\n\n";
echo "==404 URLs==\n";
echo implode("\n", $not_found_urls) . "\n\n";
echo "==Working URLs==\n";
echo implode("\n", $working_urls);

// 15. adds a url to the multi handle
function add_url_to_multi_handle($mh, $url_list)
{
    static $index = 0;
    // if there are still urls left to fetch
    if (isset($url_list[$index])) {
        // new curl handle
        $ch = curl_init();
        // specify the url
        curl_setopt($ch, CURLOPT_URL, $url_list[$index]);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);
        curl_setopt($ch, CURLOPT_NOBODY, 1);
        curl_multi_add_handle($mh, $ch);
        // move on to the next url
        $index++;
        return true;
    } else {
        // all URLs have been added
        return false;
    }
}

Here I will try to break everything down. The numbers in the list correspond to the numbered comments in the code.

  1. Create a multiple handle;
  2. We will write the add_url_to_multi_handle() function a little later. Each time it is called, a new url is added for processing. Initially, we add 10 ($max_connections) URLs;
  3. To get started, we must run curl_multi_exec(). As long as it returns CURLM_CALL_MULTI_PERFORM, there is still work to do. We need this mainly to create the connections;
  4. Next comes the main loop, which runs as long as we have at least one active connection;
  5. curl_multi_select() waits until some URL lookup completes;
  6. Once again, we get cURL to do some work, namely to fetch the returned response data;
  7. Here the information is checked. As a result of a completed request, an array is returned;
  8. The returned array contains a cURL handle. This is what we use to fetch information about the specific cURL request;
  9. If the link was dead, or the request ran out of time, there will be no http code;
  10. If the link returned a 404 page, the http code will contain the value 404;
  11. Otherwise, we have a working link in front of us. (You could add additional checks for error code 500, etc.);
  12. Next, we remove the cURL handle because we don't need it anymore;
  13. Now we can add another url and run everything we talked about before;
  14. At this step, the script finishes its work. We can remove everything we don't need and generate the report;
  15. Finally, we write the function that adds a url to the handle. The static variable $index is incremented every time this function is called.

I used this script on my blog (with some broken links added on purpose to test it out) and got the following result:

In my case, the script took just under 2 seconds to run through 40 URLs. The performance gain is significant when dealing with even more URLs. If you open ten connections at the same time, the script can run ten times faster.

A few words about other useful cURL options

HTTP Authentication

If a URL requires HTTP authentication, you can easily use the following script:

$url = "http://www.somesite.com/members/"; $ch = curl_init(); curl_setopt($ch, CURLOPT_URL, $url); curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); // specify username and password curl_setopt($ch, CURLOPT_USERPWD, "myusername:mypassword"); // if the redirect is allowed curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1); // then save our data in cURL curl_setopt($ch, CURLOPT_UNRESTRICTED_AUTH, 1); $output = curl_exec($ch); curl_close($ch);

FTP upload

PHP also has a library for working with FTP, but nothing prevents you from using cURL here as well:

// open the file
$fp = fopen("/path/to/file", "r");
// the url must have the following format
$url = "ftp://username:password@mydomain.com:21/path/to/new/file";
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_UPLOAD, 1);
curl_setopt($ch, CURLOPT_INFILE, $fp);
curl_setopt($ch, CURLOPT_INFILESIZE, filesize("/path/to/file"));
// specify ASCII mode
curl_setopt($ch, CURLOPT_FTPASCII, 1);
$output = curl_exec($ch);
curl_close($ch);

Using a Proxy

You can make your URL request through a proxy:

$ch = curl_init(); curl_setopt($ch, CURLOPT_URL,"http://www.example.com"); curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); // specify the address curl_setopt($ch, CURLOPT_PROXY, "11.11.11.11:8080"); // if you need to provide a username and password curl_setopt($ch, CURLOPT_PROXYUSERPWD,"user:pass"); $output = curl_exec($ch); curl_close($ch);

Callbacks

It is also possible to specify a function that will be called repeatedly while the cURL request is still executing. For example, while the content of a response is loading, you can start using the data without waiting for the full download.

$ch = curl_init(); curl_setopt($ch, CURLOPT_URL,"http://net.tutsplus.com"); curl_setopt($ch, CURLOPT_WRITEFUNCTION,"progress_function"); curl_exec($ch); curl_close($ch); function progress_function($ch,$str) ( echo $str; return strlen($str); )

Such a function MUST return the length of the string it received; this is a requirement.
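cURL actually uses that return value: if the callback returns anything other than the number of bytes it was handed, the transfer is aborted. You can use this deliberately, for example to stop downloading once you have seen enough (a sketch with a hypothetical 1 KB limit):

// stop the transfer after roughly the first kilobyte
function limited_writer($ch, $str)
{
    static $received = 0;
    $received += strlen($str);
    echo $str;
    // returning a different length makes cURL abort the transfer
    return ($received > 1024) ? 0 : strlen($str);
}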

Conclusion

Today we learned how the cURL library can be used for our own selfish purposes. I hope you enjoyed this article.

Thanks! Have a good day!

And so, now I will talk about how to hack something without any special knowledge of anything. I will say right away that there is little benefit from this, but still.
First, you need to find the sites themselves. To do this, go to google.com and search for the dorks

inurl:pageid= inurl:games.php?id= inurl:page.php?file= inurl:newsDetail.php?id= inurl:gallery.php?id= inurl:article.php?id= inurl:show.php?id= inurl:staff_id= inurl:newsitem.php?num= inurl:readnews.php?id= inurl:top10.php?cat= inurl:historialeer.php?num= inurl:reagir.php?num= inurl:Stray-Questions-View.php?num= inurl:forum_bds.php?num= inurl:game.php?id= inurl:view_product.php?id= inurl:newsone.php?id= inurl:sw_comment.php?id= inurl:news.php?id= inurl:avd_start.php?avd= inurl:event.php?id= inurl:product-item.php?id= inurl:sql.php?id= inurl:news_view.php?id= inurl:select_biblio.php?id= inurl:humor.php?id= inurl:aboutbook.php?id= inurl:ogl_inet.php?ogl_id= inurl:fiche_spectacle.php?id= inurl:communique_detail.php?id= inurl:sem.php3?id= inurl:kategorie.php4?id= inurl:news.php?id= inurl:index.php?id= inurl:faq2.php?id= inurl:show_an.php?id= inurl:preview.php?id= inurl:loadpsb.php?id= inurl:opinions.php?id= inurl:spr.php?id= inurl:pages.php?id= inurl:announce.php?id= inurl:clanek.php4?id= inurl:participant.php?id= inurl:download.php?id= inurl:main.php?id= inurl:review.php?id= inurl:chappies.php?id= inurl:read.php?id= inurl:prod_detail.php?id= inurl:viewphoto.php?id= inurl:article.php?id= inurl:person.php?id= inurl:productinfo.php?id= inurl:showimg.php?id= inurl:view.php?id= inurl:website.php?id= inurl:hosting_info.php?id= inurl:gallery.php?id= inurl:rub.php?idr= inurl:view_faq.php?id= inurl:artikelinfo.php?id= inurl:detail.php?ID= inurl:index.php?= inurl:profile_view.php?id= inurl:category.php?id= inurl:publications.php?id= inurl:fellows.php?id= inurl:downloads_info.php?id= inurl:prod_info.php?id= inurl:shop.php?do=part&id= inurl:productinfo.php?id= inurl:collectionitem.php?id= inurl:band_info.php?id= inurl:product.php?id= inurl:releases.php?id= inurl:ray.php?id= inurl:produit.php?id= inurl:pop.php?id= inurl:shopping.php?id= inurl:productdetail.php?id= inurl:post.php?id= inurl:viewshowdetail.php?id= inurl:clubpage.php?id= inurl:memberInfo.php?id= inurl:section.php?id= inurl:theme.php?id= inurl:page.php?id= inurl:shredder-categories.php?id= inurl:tradeCategory.php?id= inurl:product_ranges_view.php?ID= inurl:shop_category.php?id= inurl:transcript.php?id= inurl:channel_id= inurl:item_id= inurl:newsid= inurl:trainers.php?id= inurl:news-full.php?id= inurl:news_display.php?getid= inurl:index2.php?option= inurl:readnews.php?id= inurl:top10.php?cat= inurl:newsone.php?id= inurl:event.php?id= inurl:product-item.php?id= inurl:sql.php?id= inurl:aboutbook.php?id= inurl:preview.php?id= inurl:loadpsb.php?id= inurl:pages.php?id= inurl:material.php?id= inurl:clanek.php4?id= inurl:announce.php?id= inurl:chappies.php?id= inurl:read.php?id= inurl:viewapp.php?id= inurl:viewphoto.php?id= inurl:rub.php?idr= inurl:galeri_info.php?l= inurl:review.php?id= inurl:iniziativa.php?in= inurl:curriculum.php?id= inurl:labels.php?id= inurl:story.php?id= inurl:look.php?ID= inurl:newsone.php?id= inurl:aboutbook.php?id= inurl:material.php?id= inurl:opinions.php?id= inurl:announce.php?id= inurl:rub.php?idr= inurl:galeri_info.php?l= inurl:tekst.php?idt= inurl:newscat.php?id= inurl:newsticker_info.php?idn= inurl:rubrika.php?idr= inurl:rubp.php?idr= inurl:offer.php?idf= inurl:art.php?idm= inurl:title.php?id= inurl:".php?id=1" inurl:".php?cat=1" inurl:".php?catid=1" inurl:".php?num=1" inurl:".php?bid=1" inurl:".php?pid=1" inurl:".php?nid=1"

Here is a small selection; you can also use your own. And so, we have found a site, for example http://www.vestitambov.ru/
Next, download this program


Click OK. Then we enter the victim's site.
We press start. Then we wait for the results.
And so, the program has found an SQL vulnerability.

Next, download Havij and paste the resulting link into it: http://www.vestitambov.ru:80/index.php?module=group_programs&id_gp= I won't explain how to use Havij or where to download it; it's not difficult to find. Done. You have obtained the data you need (the administrator's password), and the rest is up to your imagination.

P.S. This is my first attempt at writing something like this. I'm sorry if it's not done right.

This article should have been rewritten a long time ago (there is too much "saving on matches" in it, i.e. petty micro-optimization), but I never get around to it. Let it stay here and remind us how naive we are in our youth.
One of the main criteria for the success of any Internet resource is the speed of its work, and every year users become more and more demanding on this criterion. Optimizing PHP scripts is one of the methods of ensuring the speed of the system.
In this article, I would like to present to the public my collection of tips and facts on script optimization. The collection was gathered over a long time and is based on several sources and personal experiments.
Why a collection of tips and facts and not hard and fast rules? Because, as I have seen, there is no "absolutely correct optimization". Many techniques and rules are contradictory and it is impossible to fulfill them all. You have to choose a set of methods that are acceptable to use without sacrificing security and convenience. I have taken an advisory position, and therefore I offer advice and facts that you may or may not follow.
To avoid confusion, I divided all the tips and facts into 3 groups:

  • Optimization at the level of logic and organization of the application
  • Code optimization
  • Useless optimization
The groups are allocated somewhat arbitrarily, and some items can be attributed to several of them at once. The figures are given for an average server (LAMP). The article does not address issues related to the effectiveness of various third-party technologies and frameworks, since those are topics for separate discussions.

Optimization at the level of logic and organization of the application

Many of the tips and facts related to this optimization group are very significant and give a very large gain in time.
  • Constantly profile your code on the server (xdebug) and on the client (firebug) to identify code bottlenecks
    It should be noted that you need to profile both the server and client parts, since not all server errors can be detected on the server itself.
  • The number of user-defined functions used in the program does not affect the speed in any way
    This allows you to use a virtually unlimited number of user-defined functions in the program.
  • Make active use of custom functions
    A positive effect is achieved due to the fact that inside the functions, operations are carried out only with local variables. The effect of this is greater than the cost of user-defined function calls.
  • It is desirable to implement “critically heavy” functions in a third-party programming language as a PHP extension
    This requires programming skills in a third-party language, which greatly increases the development time, but at the same time allows you to use tricks beyond the capabilities of PHP.
  • Processing a static html file is faster than an interpreted php file
    The difference in time on the client can be about 1 second, so a clear separation of static and PHP-generated pages makes sense.
  • The size of the processed (connected) file affects the speed
    Approximately 0.001 seconds are spent on processing every 2 KB. This fact pushes us to minimize the script code when transferring it to a production server.
  • Try not to use require_once or include_once all the time
    These functions should be used when a file might otherwise be included a second time; in other cases it is preferable to use require and include.
  • When branching the algorithm, if there are constructions that may not be processed and their volume is about 4 KB or more, then it is more optimal to include them using include.
  • It is advisable to use validation of sent data on the client
    This is due to the fact that when validating data on the client side, the number of requests with incorrect data is drastically reduced. Client-side data validation systems are built primarily using JS and hard form elements (select).
  • It is desirable to build large DOM constructs for data arrays on the client
    This is a very effective optimization method when displaying large amounts of data. Its essence boils down to the following: an array of data is prepared on the server and transferred to the client, and the construction of the DOM structures is left to JS functions. As a result, the load is partially redistributed from the server to the client.
  • Systems built on AJAX technology are much faster than systems that do not use this technology.
    This is due to reduced output volume and the redistribution of load to the client. In practice, the speed of systems with AJAX is 2-3 times higher. Comment: AJAX, in turn, imposes a number of restrictions on the use of other optimization methods, such as working with the output buffer.
  • When responding to a POST request, always return something, even a space
    Otherwise, an error page weighing several kilobytes will be sent to the client. This error is very common in systems using AJAX technology.
  • Getting data from a file is faster than from a database
    This is largely due to the cost of connecting to the database. To my surprise, a huge percentage of programmers maniacally store all data in a database, even when using files is faster and more convenient. Comment: files can store data that is not searched, otherwise a database should be used.
  • Do not connect to the database unnecessarily
    For a reason unknown to me, many programmers connect to the database at the stage of reading the settings, although they may not make further queries to the database. This is a bad habit that costs an average of 0.002 seconds.
  • Use a persistent database connection with a small number of simultaneously active clients
    The benefit in time is caused by the absence of costs for connecting to the database. The time difference is approximately 0.002 seconds. Comment: with a large number of users, persistent connections are undesirable. When dealing with persistent connections, there must be a mechanism for terminating connections.
  • Using complex database queries is faster than using a few simple ones
    The time difference depends on many factors (data volume, database settings, etc.) and is measured in thousandths, and sometimes even hundredths, of a second.
  • Using DBMS side calculations is faster than PHP side calculations for data stored in the database
    This is due to the fact that for such calculations on the PHP side, two queries to the database are required (getting and changing data). The time difference depends on many factors (data volume, database settings, etc.) and is measured in thousandths and hundredths of a second.
  • If data selected from the database rarely changes and many users access it, it makes sense to save the selection to a file
    For example, you can use the following simple approach: get the selection from the database and save it as a serialized array to a file; then any user uses the data from the file (see the sketch after this list). In practice, this optimization method can give a multiple increase in script execution speed. Comment: when using this method, you need tools for generating the file and updating it when the data changes.
  • Cache data that rarely changes with memcached
    The gain in time can be quite significant. Comment: caching is effective for static data, for dynamic data the effect is reduced and may be negative.
  • Working without objects (without OOP) is about three times faster than working with objects
    Objects also "eat up" more memory. Unfortunately, the PHP interpreter cannot handle OOP as fast as regular functions.
  • The higher the dimension of arrays, the slower they work.
    The loss of time arises from the processing of the nesting of structures.
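As promised, here is a minimal sketch of the file-caching approach mentioned above (the file name, table, and query are assumptions purely for illustration; the mysql_* functions match the era of this article):

// return the cached selection if it exists, otherwise rebuild it from the DB
function get_cached_selection($cache_file = "cache/selection.dat")
{
    if (file_exists($cache_file)) {
        return unserialize(file_get_contents($cache_file));
    }
    $rows = array();
    $r = mysql_query("SELECT id, title FROM news ORDER BY id DESC LIMIT 20");
    while ($d = mysql_fetch_assoc($r)) {
        $rows[] = $d;
    }
    file_put_contents($cache_file, serialize($rows));
    return $rows;
}

Whenever the underlying data changes, simply delete the cache file and it will be rebuilt on the next request.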

Code optimization

These tips and facts give a smaller increase in speed than the previous group, but taken together these techniques can give a good gain in time. (A minimal timing harness for verifying such claims yourself is shown after this list.)
  • echo and print are significantly faster than printf
    The time difference can reach several thousandths of a second. This is because printf is meant for outputting formatted data, and the interpreter checks the entire string for format specifiers. printf should only be used for data that actually needs formatting.
  • echo $var."text" is faster than echo "$var text"
    This is because in the second case the PHP engine is forced to look for variables inside the string. For large amounts of data and older versions of PHP, the time difference is noticeable.
  • echo "a" is faster than echo "a" for strings without variables
    This is because in the second case the PHP engine is trying to find the variables. For large amounts of data, the differences in time are quite noticeable.
  • echo "a","b" is faster than echo "a"."b"
    Comma-separated output is faster than dot-separated output, because in the second case a string concatenation is performed. For large amounts of data, the time difference is quite noticeable. Note: this only works with the echo construct, which can take several strings as arguments.
  • $return="a"; $return.="b"; echo $return; faster than echo "a"; echo "b";
    The reason is that the data output requires some additional operations. For large amounts of data, the differences in time are quite noticeable.
  • ob_start(); echo "a"; echo "b"; ob_end_flush(); faster than $return="a"; $return.="b"; echo $return;
    This is because all the work is done without accessing variables. For large amounts of data, the time difference is quite noticeable. Comment: this technique is not effective if you are working with AJAX, since in that case it is desirable to return the data as a single string.
  • Use "professional insert" or? > a b
    Static data (outside code) is processed faster than PHP output. This technique is called professional insertion. For large amounts of data, the differences in time are quite noticeable.
  • readfile is faster than file_get_contents , file_get_contents is faster than require , and require is faster than include to output static content from a single file
    The time to read an empty file fluctuates from 0.001 for readfile to 0.002 for include .
  • require is faster than include for interpreted files
    Comment: when branching the algorithm, when it is possible not to use the interpreted file, you must use include , because require always includes a file.
  • if (...) {...} else if (...) {...} is faster than switch
    The time depends on the number of branches.
  • if (...) {...} else if (...) {...} is faster than if (...) {...}; if (...) {...};
    The time depends on the number of branches and conditions. You should use else if wherever possible, as it is the fastest "conditional" construct.
  • The most common conditions of the if (...) (...) else if (...) () construct should be placed at the beginning of the branch
    The interpreter scans the structure from top to bottom until it finds a condition. If the interpreter finds that the condition is met, then it does not look at the rest of the structure.
  • $x = sizeOf($array); for ($i = 0; $i < $x; ++$i) {...} is faster than for ($i = 0; $i < sizeOf($array); ++$i) {...}
    This is because in the second case the sizeOf operation is executed on every iteration. The difference in execution time depends on the number of array elements.
  • $x = sizeOf($array); for ($i = 0; $i < $x; ++$i) {...} is faster than foreach ($arr as $value) {...} for non-associative arrays
    The time difference is significant and increases as the array grows.
  • preg_replace is faster than ereg_replace , str_replace is faster than preg_replace , but strtr is faster than str_replace
    The time difference depends on the amount of data and can reach several thousandths of a second.
  • String functions are faster than regular expressions
    This rule is a consequence of the previous one.
  • Delete already unnecessary array variables to free up memory.
  • Avoid using error suppression @
    Error suppression produces a number of very slow operations, and since the repetition rate can be very high, the speed loss can be significant.
  • if (isset($str[5])) {...} is faster than if (strlen($str) > 4) {...}
    This is because the standard isset check is used instead of the string function strlen.
  • 0.5 is faster than 1/2
    The reason is that in the second case, a division operation is performed.
  • return is faster than global when returning the value of a variable from a function
    This is because in the second case a global variable is created.
  • $row["id"] is faster than $row
    The first option is 7 times faster.
  • $_SERVER['REQUEST_TIME'] is faster than time() for determining when a script should run
  • if ($var === null) {...} is faster than if (is_null($var)) {...}
    The reason is that the first case does not use a function call.
  • ++$i is faster than $i++, and --$i is faster than $i--
    This is due to a peculiarity of the PHP core. The time difference is less than 0.000001 seconds, but if these operations are repeated thousands of times, take a closer look at this optimization.
  • Incrementing an initialized variable ($i = 0; ++$i;) is faster than incrementing an uninitialized one (++$i)
    The time difference is about 0.000001 seconds, but because of the possible repetition frequency, this fact is worth remembering.
  • Using "used" variables is faster than declaring new ones
    Or to rephrase it differently - Do not create unnecessary variables.
  • Working with local variables is about 2 times faster than with global ones
    Although the time difference is less than 0.000001 seconds, because of the high repetition frequency you should try to work with local variables.
  • Accessing a variable directly is faster than calling a function inside which that variable is defined
    Calling a function takes about three times longer than accessing a variable.
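As promised, here is a minimal timing harness for checking any of the claims in this list on your own hardware (the two variants here test the concatenation-vs-interpolation claim; substitute whatever you want to compare):

// crude micro-benchmark: run each variant many times and compare wall time
$iterations = 100000;
$var = "hello";

$t = microtime(true);
for ($i = 0; $i < $iterations; ++$i) {
    $s = $var . " text";       // variant 1: concatenation
}
$time1 = microtime(true) - $t;

$t = microtime(true);
for ($i = 0; $i < $iterations; ++$i) {
    $s = "$var text";          // variant 2: interpolation
}
$time2 = microtime(true) - $t;

printf("concatenation: %.6f s, interpolation: %.6f s\n", $time1, $time2);

Run each comparison several times and on a quiet machine; differences of a few microseconds per iteration are easily drowned out by noise.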

Useless optimization

A number of optimization methods have little practical impact on script execution speed (a time gain of less than 0.000001 seconds). Despite this, such optimizations are often the subject of controversy. I list these "useless" facts so that you do not pay special attention to them when writing code.
  • echo is faster than print
  • include("absolute path") is faster than include("relative path")
  • sizeOf is faster than count
  • foreach ($arr as $key => $value) {...} is faster than reset($arr); while (list($key, $value) = each($arr)) {...} for associative arrays
  • Uncommented code is faster than commented code, since comments add extra time for reading the file
    It is very stupid to cut down comments for the sake of optimization; you only need to minify the code in production ("combat") scripts.
  • Variables with short names are faster than variables with long names
    This is due to the reduction in the amount of code being processed. Similar to the previous point, you just need to minimize it in working (“combat”) scripts.
  • Code markup using tabs is faster than using spaces
    Similar to the previous point.
Finally, I want to remind you once again that the advice and facts given here are not absolute, and the value of applying them depends on the specific situation. Remember that script optimization is only a small part of the entire optimization procedure, and it is often possible to live without the above tips.

Materials from other sources were partially used in writing this article.